00:00:00.000 Started by upstream project "autotest-per-patch" build number 126251 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.107 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.148 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.199 Using shallow fetch with depth 1 00:00:00.199 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.199 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.318 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.329 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.341 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.341 > git config core.sparsecheckout # timeout=10 00:00:04.351 > git read-tree -mu HEAD # timeout=10 00:00:04.369 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.392 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.392 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.496 [Pipeline] Start of Pipeline 00:00:04.511 [Pipeline] library 00:00:04.512 Loading library shm_lib@master 00:00:04.512 Library shm_lib@master is cached. Copying from home. 00:00:04.527 [Pipeline] node 00:00:04.537 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:04.539 [Pipeline] { 00:00:04.548 [Pipeline] catchError 00:00:04.549 [Pipeline] { 00:00:04.559 [Pipeline] wrap 00:00:04.566 [Pipeline] { 00:00:04.572 [Pipeline] stage 00:00:04.574 [Pipeline] { (Prologue) 00:00:04.590 [Pipeline] echo 00:00:04.591 Node: VM-host-WFP1 00:00:04.596 [Pipeline] cleanWs 00:00:04.604 [WS-CLEANUP] Deleting project workspace... 00:00:04.604 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.610 [WS-CLEANUP] done 00:00:04.778 [Pipeline] setCustomBuildProperty 00:00:04.850 [Pipeline] httpRequest 00:00:04.866 [Pipeline] echo 00:00:04.867 Sorcerer 10.211.164.101 is alive 00:00:04.875 [Pipeline] httpRequest 00:00:04.879 HttpMethod: GET 00:00:04.880 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.880 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.892 Response Code: HTTP/1.1 200 OK 00:00:04.892 Success: Status code 200 is in the accepted range: 200,404 00:00:04.892 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.295 [Pipeline] sh 00:00:07.573 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.593 [Pipeline] httpRequest 00:00:07.615 [Pipeline] echo 00:00:07.617 Sorcerer 10.211.164.101 is alive 00:00:07.627 [Pipeline] httpRequest 00:00:07.634 HttpMethod: GET 00:00:07.635 URL: http://10.211.164.101/packages/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:07.635 Sending request to url: http://10.211.164.101/packages/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:07.641 Response Code: HTTP/1.1 200 OK 00:00:07.642 Success: Status code 200 is in the accepted range: 200,404 00:00:07.642 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:54.536 [Pipeline] sh 00:00:54.816 + tar --no-same-owner -xf spdk_0663932f504f7e873432b6fb363ab180df70f8a0.tar.gz 00:00:57.368 [Pipeline] sh 00:00:57.671 + git -C spdk log --oneline -n5 00:00:57.671 0663932f5 util: add spdk_net_getaddr 00:00:57.671 9da437b46 util: move module/sock/sock_kernel.h contents to net.c 00:00:57.671 35c6d81e6 util: add spdk_net_get_interface_name 00:00:57.672 f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support 00:00:57.672 4903ec649 ublk: use spdk_read_sysfs_attribute_uint32 to get max ublks 00:00:57.691 [Pipeline] writeFile 00:00:57.726 [Pipeline] sh 00:00:58.021 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:58.032 [Pipeline] sh 00:00:58.310 + cat autorun-spdk.conf 00:00:58.310 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.310 SPDK_TEST_NVMF=1 00:00:58.310 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.310 SPDK_TEST_URING=1 00:00:58.310 SPDK_TEST_USDT=1 00:00:58.310 SPDK_RUN_UBSAN=1 00:00:58.310 NET_TYPE=virt 00:00:58.310 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.316 RUN_NIGHTLY=0 00:00:58.318 [Pipeline] } 00:00:58.331 [Pipeline] // stage 00:00:58.343 [Pipeline] stage 00:00:58.345 [Pipeline] { (Run VM) 00:00:58.354 [Pipeline] sh 00:00:58.635 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:58.635 + echo 'Start stage prepare_nvme.sh' 00:00:58.635 Start stage prepare_nvme.sh 00:00:58.635 + [[ -n 4 ]] 00:00:58.635 + disk_prefix=ex4 00:00:58.635 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:00:58.635 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:00:58.635 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:00:58.635 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.635 ++ SPDK_TEST_NVMF=1 00:00:58.635 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.635 ++ SPDK_TEST_URING=1 00:00:58.635 ++ SPDK_TEST_USDT=1 00:00:58.635 ++ SPDK_RUN_UBSAN=1 00:00:58.635 ++ NET_TYPE=virt 00:00:58.635 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:58.635 ++ RUN_NIGHTLY=0 00:00:58.635 + 
cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:58.635 + nvme_files=() 00:00:58.635 + declare -A nvme_files 00:00:58.635 + backend_dir=/var/lib/libvirt/images/backends 00:00:58.635 + nvme_files['nvme.img']=5G 00:00:58.635 + nvme_files['nvme-cmb.img']=5G 00:00:58.635 + nvme_files['nvme-multi0.img']=4G 00:00:58.635 + nvme_files['nvme-multi1.img']=4G 00:00:58.635 + nvme_files['nvme-multi2.img']=4G 00:00:58.635 + nvme_files['nvme-openstack.img']=8G 00:00:58.635 + nvme_files['nvme-zns.img']=5G 00:00:58.635 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:58.635 + (( SPDK_TEST_FTL == 1 )) 00:00:58.635 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:58.635 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:58.635 + for nvme in "${!nvme_files[@]}" 00:00:58.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:58.635 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:58.635 + for nvme in "${!nvme_files[@]}" 00:00:58.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:58.635 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.635 + for nvme in "${!nvme_files[@]}" 00:00:58.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:58.635 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:58.635 + for nvme in "${!nvme_files[@]}" 00:00:58.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:59.570 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:59.570 + for nvme in "${!nvme_files[@]}" 00:00:59.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:59.570 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:59.570 + for nvme in "${!nvme_files[@]}" 00:00:59.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:59.570 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:59.570 + for nvme in "${!nvme_files[@]}" 00:00:59.570 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:00.136 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:00.136 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:00.136 + echo 'End stage prepare_nvme.sh' 00:01:00.136 End stage prepare_nvme.sh 00:01:00.147 [Pipeline] sh 00:01:00.427 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:00.427 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:01:00.427 00:01:00.427 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 
00:01:00.427 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:00.427 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:00.427 HELP=0 00:01:00.427 DRY_RUN=0 00:01:00.427 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:00.427 NVME_DISKS_TYPE=nvme,nvme, 00:01:00.427 NVME_AUTO_CREATE=0 00:01:00.427 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:00.427 NVME_CMB=,, 00:01:00.427 NVME_PMR=,, 00:01:00.427 NVME_ZNS=,, 00:01:00.427 NVME_MS=,, 00:01:00.427 NVME_FDP=,, 00:01:00.427 SPDK_VAGRANT_DISTRO=fedora38 00:01:00.428 SPDK_VAGRANT_VMCPU=10 00:01:00.428 SPDK_VAGRANT_VMRAM=12288 00:01:00.428 SPDK_VAGRANT_PROVIDER=libvirt 00:01:00.428 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:00.428 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:00.428 SPDK_OPENSTACK_NETWORK=0 00:01:00.428 VAGRANT_PACKAGE_BOX=0 00:01:00.428 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:00.428 FORCE_DISTRO=true 00:01:00.428 VAGRANT_BOX_VERSION= 00:01:00.428 EXTRA_VAGRANTFILES= 00:01:00.428 NIC_MODEL=e1000 00:01:00.428 00:01:00.428 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:01:00.428 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:02.982 Bringing machine 'default' up with 'libvirt' provider... 00:01:04.358 ==> default: Creating image (snapshot of base box volume). 00:01:04.358 ==> default: Creating domain with the following settings... 00:01:04.358 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721078076_83d5862b562fe5c4f568 00:01:04.358 ==> default: -- Domain type: kvm 00:01:04.358 ==> default: -- Cpus: 10 00:01:04.358 ==> default: -- Feature: acpi 00:01:04.358 ==> default: -- Feature: apic 00:01:04.358 ==> default: -- Feature: pae 00:01:04.358 ==> default: -- Memory: 12288M 00:01:04.358 ==> default: -- Memory Backing: hugepages: 00:01:04.358 ==> default: -- Management MAC: 00:01:04.358 ==> default: -- Loader: 00:01:04.358 ==> default: -- Nvram: 00:01:04.358 ==> default: -- Base box: spdk/fedora38 00:01:04.358 ==> default: -- Storage pool: default 00:01:04.358 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721078076_83d5862b562fe5c4f568.img (20G) 00:01:04.358 ==> default: -- Volume Cache: default 00:01:04.358 ==> default: -- Kernel: 00:01:04.358 ==> default: -- Initrd: 00:01:04.358 ==> default: -- Graphics Type: vnc 00:01:04.358 ==> default: -- Graphics Port: -1 00:01:04.358 ==> default: -- Graphics IP: 127.0.0.1 00:01:04.358 ==> default: -- Graphics Password: Not defined 00:01:04.358 ==> default: -- Video Type: cirrus 00:01:04.358 ==> default: -- Video VRAM: 9216 00:01:04.358 ==> default: -- Sound Type: 00:01:04.358 ==> default: -- Keymap: en-us 00:01:04.358 ==> default: -- TPM Path: 00:01:04.358 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:04.358 ==> default: -- Command line args: 00:01:04.358 ==> default: -> value=-device, 00:01:04.358 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:04.358 ==> default: -> value=-drive, 00:01:04.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:04.358 ==> default: -> 
value=-device, 00:01:04.358 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.358 ==> default: -> value=-device, 00:01:04.358 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:04.358 ==> default: -> value=-drive, 00:01:04.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:04.359 ==> default: -> value=-device, 00:01:04.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.359 ==> default: -> value=-drive, 00:01:04.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:04.359 ==> default: -> value=-device, 00:01:04.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.359 ==> default: -> value=-drive, 00:01:04.359 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:04.359 ==> default: -> value=-device, 00:01:04.359 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:04.672 ==> default: Creating shared folders metadata... 00:01:04.672 ==> default: Starting domain. 00:01:06.045 ==> default: Waiting for domain to get an IP address... 00:01:24.119 ==> default: Waiting for SSH to become available... 00:01:24.119 ==> default: Configuring and enabling network interfaces... 00:01:28.302 default: SSH address: 192.168.121.233:22 00:01:28.302 default: SSH username: vagrant 00:01:28.302 default: SSH auth method: private key 00:01:31.601 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:39.716 ==> default: Mounting SSHFS shared folder... 00:01:41.618 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.618 ==> default: Checking Mount.. 00:01:43.015 ==> default: Folder Successfully Mounted! 00:01:43.015 ==> default: Running provisioner: file... 00:01:44.392 default: ~/.gitconfig => .gitconfig 00:01:44.670 00:01:44.670 SUCCESS! 00:01:44.670 00:01:44.670 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:44.670 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.670 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
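The libvirt domain above is generated by vagrant_create_vm.sh from the raw backing files produced in the prepare_nvme.sh stage. For orientation only (the wrapper scripts themselves are not reproduced in this log), a minimal hand-rolled sketch of one controller/namespace pair follows; the image path, size, serial and device IDs mirror the ex4-nvme entries logged above, while the qemu-img and qemu-system-x86_64 options are standard QEMU flags assumed for illustration, not taken from the scripts.

#!/usr/bin/env bash
# Sketch: recreate one of the logged NVMe backends and attach it to a guest
# as controller nvme-0 with a single 4 KiB-block namespace. Illustrative only;
# not the vagrant_create_vm.sh / create_nvme_img.sh implementation.
set -euo pipefail

backend_dir=/var/lib/libvirt/images/backends

# prepare_nvme.sh logs "fmt=raw ... preallocation=falloc"; qemu-img can
# produce the same kind of image directly.
qemu-img create -f raw -o preallocation=falloc "${backend_dir}/ex4-nvme.img" 5G

# Mirror the -drive/-device arguments from the generated domain above.
qemu-system-x86_64 \
    -m 1024 -nographic \
    -drive "format=raw,file=${backend_dir}/ex4-nvme.img,if=none,id=nvme-0-drive0" \
    -device "nvme,id=nvme-0,serial=12340,addr=0x10" \
    -device "nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096"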
00:01:44.670 00:01:44.713 [Pipeline] } 00:01:44.727 [Pipeline] // stage 00:01:44.736 [Pipeline] dir 00:01:44.736 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:01:44.737 [Pipeline] { 00:01:44.747 [Pipeline] catchError 00:01:44.749 [Pipeline] { 00:01:44.763 [Pipeline] sh 00:01:45.037 + vagrant ssh-config --host vagrant 00:01:45.037 + sed -ne /^Host/,$p 00:01:45.037 + tee ssh_conf 00:01:48.352 Host vagrant 00:01:48.352 HostName 192.168.121.233 00:01:48.352 User vagrant 00:01:48.352 Port 22 00:01:48.352 UserKnownHostsFile /dev/null 00:01:48.352 StrictHostKeyChecking no 00:01:48.352 PasswordAuthentication no 00:01:48.352 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:48.352 IdentitiesOnly yes 00:01:48.352 LogLevel FATAL 00:01:48.352 ForwardAgent yes 00:01:48.352 ForwardX11 yes 00:01:48.352 00:01:48.392 [Pipeline] withEnv 00:01:48.395 [Pipeline] { 00:01:48.410 [Pipeline] sh 00:01:48.689 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:48.689 source /etc/os-release 00:01:48.689 [[ -e /image.version ]] && img=$(< /image.version) 00:01:48.690 # Minimal, systemd-like check. 00:01:48.690 if [[ -e /.dockerenv ]]; then 00:01:48.690 # Clear garbage from the node's name: 00:01:48.690 # agt-er_autotest_547-896 -> autotest_547-896 00:01:48.690 # $HOSTNAME is the actual container id 00:01:48.690 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:48.690 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:48.690 # We can assume this is a mount from a host where container is running, 00:01:48.690 # so fetch its hostname to easily identify the target swarm worker. 00:01:48.690 container="$(< /etc/hostname) ($agent)" 00:01:48.690 else 00:01:48.690 # Fallback 00:01:48.690 container=$agent 00:01:48.690 fi 00:01:48.690 fi 00:01:48.690 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:48.690 00:01:48.959 [Pipeline] } 00:01:48.978 [Pipeline] // withEnv 00:01:48.985 [Pipeline] setCustomBuildProperty 00:01:48.999 [Pipeline] stage 00:01:49.000 [Pipeline] { (Tests) 00:01:49.015 [Pipeline] sh 00:01:49.298 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:49.570 [Pipeline] sh 00:01:49.852 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:50.159 [Pipeline] timeout 00:01:50.160 Timeout set to expire in 30 min 00:01:50.162 [Pipeline] { 00:01:50.175 [Pipeline] sh 00:01:50.453 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:51.019 HEAD is now at 0663932f5 util: add spdk_net_getaddr 00:01:51.032 [Pipeline] sh 00:01:51.430 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:51.704 [Pipeline] sh 00:01:51.986 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:52.260 [Pipeline] sh 00:01:52.543 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:52.802 ++ readlink -f spdk_repo 00:01:52.802 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.802 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.802 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.802 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.802 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:01:52.802 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:52.802 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.802 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:52.802 + cd /home/vagrant/spdk_repo 00:01:52.802 + source /etc/os-release 00:01:52.802 ++ NAME='Fedora Linux' 00:01:52.802 ++ VERSION='38 (Cloud Edition)' 00:01:52.802 ++ ID=fedora 00:01:52.802 ++ VERSION_ID=38 00:01:52.802 ++ VERSION_CODENAME= 00:01:52.802 ++ PLATFORM_ID=platform:f38 00:01:52.802 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:52.802 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.802 ++ LOGO=fedora-logo-icon 00:01:52.802 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:52.802 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.802 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:52.802 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.802 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.802 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.802 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:52.802 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.802 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:52.802 ++ SUPPORT_END=2024-05-14 00:01:52.802 ++ VARIANT='Cloud Edition' 00:01:52.802 ++ VARIANT_ID=cloud 00:01:52.802 + uname -a 00:01:52.802 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:52.802 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:53.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:53.370 Hugepages 00:01:53.370 node hugesize free / total 00:01:53.370 node0 1048576kB 0 / 0 00:01:53.370 node0 2048kB 0 / 0 00:01:53.370 00:01:53.370 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.370 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:53.370 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:53.370 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:53.370 + rm -f /tmp/spdk-ld-path 00:01:53.370 + source autorun-spdk.conf 00:01:53.370 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.370 ++ SPDK_TEST_NVMF=1 00:01:53.370 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.370 ++ SPDK_TEST_URING=1 00:01:53.370 ++ SPDK_TEST_USDT=1 00:01:53.370 ++ SPDK_RUN_UBSAN=1 00:01:53.370 ++ NET_TYPE=virt 00:01:53.370 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.370 ++ RUN_NIGHTLY=0 00:01:53.370 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.370 + [[ -n '' ]] 00:01:53.370 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:53.629 + for M in /var/spdk/build-*-manifest.txt 00:01:53.629 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.629 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.629 + for M in /var/spdk/build-*-manifest.txt 00:01:53.629 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.629 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:53.629 ++ uname 00:01:53.629 + [[ Linux == \L\i\n\u\x ]] 00:01:53.629 + sudo dmesg -T 00:01:53.629 + sudo dmesg --clear 00:01:53.629 + sudo dmesg -Tw 00:01:53.629 + dmesg_pid=5106 00:01:53.629 + [[ Fedora Linux == FreeBSD ]] 00:01:53.629 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.629 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.629 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.629 + [[ -x /usr/src/fio-static/fio ]] 
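The setup.sh status output above lists the per-node hugepage pools and the NVMe controllers bound inside the guest. A rough sysfs-only approximation of the same report (an illustration, not SPDK's setup.sh logic) is:

#!/usr/bin/env bash
# Print hugepage pools per NUMA node and NVMe controller PCI addresses,
# roughly matching the fields shown by `setup.sh status` above.
set -euo pipefail

for node in /sys/devices/system/node/node[0-9]*; do
    for pool in "$node"/hugepages/hugepages-*; do
        [ -d "$pool" ] || continue
        size=${pool##*hugepages-}
        printf '%s %s free=%s total=%s\n' "$(basename "$node")" "$size" \
            "$(cat "$pool/free_hugepages")" "$(cat "$pool/nr_hugepages")"
    done
done

# NVMe controllers and their PCI addresses, e.g. nvme0 -> 0000:00:10.0
for ctrl in /sys/class/nvme/nvme*; do
    [ -e "$ctrl" ] || continue
    printf '%s %s\n' "$(basename "$ctrl")" "$(basename "$(readlink -f "$ctrl/device")")"
done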
00:01:53.629 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.629 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.629 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.629 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:53.629 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.629 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.629 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.629 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.629 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.629 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.629 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:53.629 Test configuration: 00:01:53.629 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.629 SPDK_TEST_NVMF=1 00:01:53.629 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.629 SPDK_TEST_URING=1 00:01:53.629 SPDK_TEST_USDT=1 00:01:53.629 SPDK_RUN_UBSAN=1 00:01:53.629 NET_TYPE=virt 00:01:53.629 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.629 RUN_NIGHTLY=0 21:15:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:53.629 21:15:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.629 21:15:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.629 21:15:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.629 21:15:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.629 21:15:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.629 21:15:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.629 21:15:26 -- paths/export.sh@5 -- $ export PATH 00:01:53.629 21:15:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.629 21:15:26 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:53.629 21:15:26 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:53.629 21:15:26 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721078126.XXXXXX 00:01:53.629 21:15:26 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721078126.rP09IG 
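At this point autobuild_common.sh derives its output and scratch locations. A compressed sketch of that pattern (not the autobuild_common.sh implementation; paths mirror the trace above) is:

#!/usr/bin/env bash
# Sketch of the per-build scratch layout traced above: a fixed ../output
# directory next to the repo plus a unique temp workspace keyed to the
# build's epoch timestamp.
set -euo pipefail

repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output                              # matches "out=" in the trace
stamp=$(date +%s)                                # 1721078126 in this run
workspace=$(mktemp -dt "spdk_${stamp}.XXXXXX")   # e.g. /tmp/spdk_1721078126.rP09IG

mkdir -p "$out"
echo "artifacts -> $out"
echo "scratch   -> $workspace"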
00:01:53.629 21:15:26 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:53.629 21:15:26 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:53.629 21:15:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:53.629 21:15:26 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:53.629 21:15:26 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.629 21:15:26 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:53.629 21:15:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:53.630 21:15:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.888 21:15:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:53.888 21:15:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:53.888 21:15:27 -- pm/common@17 -- $ local monitor 00:01:53.888 21:15:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.888 21:15:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.888 21:15:27 -- pm/common@25 -- $ sleep 1 00:01:53.888 21:15:27 -- pm/common@21 -- $ date +%s 00:01:53.888 21:15:27 -- pm/common@21 -- $ date +%s 00:01:53.888 21:15:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721078127 00:01:53.888 21:15:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721078127 00:01:53.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721078127_collect-vmstat.pm.log 00:01:53.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721078127_collect-cpu-load.pm.log 00:01:54.823 21:15:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:54.823 21:15:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.823 21:15:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.823 21:15:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:54.823 21:15:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.823 Mon Jul 15 09:15:28 PM UTC 2024 00:01:54.823 21:15:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.823 v24.09-pre-217-g0663932f5 00:01:54.823 21:15:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:54.823 21:15:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.823 21:15:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.823 21:15:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:54.823 21:15:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.823 21:15:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.823 ************************************ 00:01:54.823 START TEST ubsan 00:01:54.823 ************************************ 00:01:54.823 using ubsan 00:01:54.823 21:15:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:54.823 00:01:54.823 real 0m0.001s 00:01:54.823 user 0m0.000s 00:01:54.823 sys 
0m0.000s 00:01:54.823 21:15:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:54.823 21:15:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:54.823 ************************************ 00:01:54.823 END TEST ubsan 00:01:54.823 ************************************ 00:01:54.823 21:15:28 -- common/autotest_common.sh@1142 -- $ return 0 00:01:54.823 21:15:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:54.823 21:15:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:54.823 21:15:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:54.823 21:15:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:55.081 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:55.081 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:55.648 Using 'verbs' RDMA provider 00:02:11.506 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:26.377 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:26.635 Creating mk/config.mk...done. 00:02:26.635 Creating mk/cc.flags.mk...done. 00:02:26.635 Type 'make' to build. 00:02:26.635 21:15:59 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:26.635 21:15:59 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:26.635 21:15:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:26.635 21:15:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.635 ************************************ 00:02:26.635 START TEST make 00:02:26.635 ************************************ 00:02:26.635 21:15:59 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:27.202 make[1]: Nothing to be done for 'all'. 
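The configure invocation and the make step traced above can be reproduced by hand inside the VM. The flag list below is copied from the logged configure command; paths such as /usr/src/fio are properties of this particular CI image rather than general requirements.

#!/usr/bin/env bash
# Re-run the same SPDK configure/make step manually, using the exact options
# from the logged autobuild invocation.
set -euo pipefail

cd /home/vagrant/spdk_repo/spdk

./configure \
    --enable-debug --enable-werror \
    --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-uring --with-shared

make -j10    # the log's run_test "make make -j10" step is equivalent to this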
00:02:37.176 The Meson build system 00:02:37.176 Version: 1.3.1 00:02:37.176 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:37.176 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:37.176 Build type: native build 00:02:37.176 Program cat found: YES (/usr/bin/cat) 00:02:37.176 Project name: DPDK 00:02:37.176 Project version: 24.03.0 00:02:37.176 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.176 C linker for the host machine: cc ld.bfd 2.39-16 00:02:37.176 Host machine cpu family: x86_64 00:02:37.176 Host machine cpu: x86_64 00:02:37.176 Message: ## Building in Developer Mode ## 00:02:37.176 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.176 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:37.176 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.176 Program python3 found: YES (/usr/bin/python3) 00:02:37.176 Program cat found: YES (/usr/bin/cat) 00:02:37.176 Compiler for C supports arguments -march=native: YES 00:02:37.176 Checking for size of "void *" : 8 00:02:37.176 Checking for size of "void *" : 8 (cached) 00:02:37.176 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:37.176 Library m found: YES 00:02:37.176 Library numa found: YES 00:02:37.176 Has header "numaif.h" : YES 00:02:37.176 Library fdt found: NO 00:02:37.176 Library execinfo found: NO 00:02:37.176 Has header "execinfo.h" : YES 00:02:37.176 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.176 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.176 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.176 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.176 Run-time dependency openssl found: YES 3.0.9 00:02:37.176 Run-time dependency libpcap found: YES 1.10.4 00:02:37.176 Has header "pcap.h" with dependency libpcap: YES 00:02:37.176 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.176 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.176 Compiler for C supports arguments -Wformat: YES 00:02:37.176 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.176 Compiler for C supports arguments -Wformat-security: NO 00:02:37.176 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.176 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.176 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.176 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.176 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.176 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.176 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.176 Compiler for C supports arguments -Wundef: YES 00:02:37.176 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.176 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.176 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.176 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.176 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.176 Program objdump found: YES (/usr/bin/objdump) 00:02:37.176 Compiler for C supports arguments -mavx512f: YES 00:02:37.176 Checking if "AVX512 checking" compiles: YES 00:02:37.176 Fetching value of define "__SSE4_2__" : 1 00:02:37.176 Fetching value of define 
"__AES__" : 1 00:02:37.176 Fetching value of define "__AVX__" : 1 00:02:37.176 Fetching value of define "__AVX2__" : 1 00:02:37.176 Fetching value of define "__AVX512BW__" : 1 00:02:37.176 Fetching value of define "__AVX512CD__" : 1 00:02:37.176 Fetching value of define "__AVX512DQ__" : 1 00:02:37.176 Fetching value of define "__AVX512F__" : 1 00:02:37.176 Fetching value of define "__AVX512VL__" : 1 00:02:37.176 Fetching value of define "__PCLMUL__" : 1 00:02:37.176 Fetching value of define "__RDRND__" : 1 00:02:37.176 Fetching value of define "__RDSEED__" : 1 00:02:37.176 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.176 Fetching value of define "__znver1__" : (undefined) 00:02:37.176 Fetching value of define "__znver2__" : (undefined) 00:02:37.176 Fetching value of define "__znver3__" : (undefined) 00:02:37.176 Fetching value of define "__znver4__" : (undefined) 00:02:37.176 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.176 Message: lib/log: Defining dependency "log" 00:02:37.176 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.176 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.176 Checking for function "getentropy" : NO 00:02:37.176 Message: lib/eal: Defining dependency "eal" 00:02:37.176 Message: lib/ring: Defining dependency "ring" 00:02:37.176 Message: lib/rcu: Defining dependency "rcu" 00:02:37.176 Message: lib/mempool: Defining dependency "mempool" 00:02:37.176 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.176 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.176 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.176 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.176 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.176 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.176 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.176 Compiler for C supports arguments -mpclmul: YES 00:02:37.176 Compiler for C supports arguments -maes: YES 00:02:37.176 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.176 Compiler for C supports arguments -mavx512bw: YES 00:02:37.176 Compiler for C supports arguments -mavx512dq: YES 00:02:37.176 Compiler for C supports arguments -mavx512vl: YES 00:02:37.177 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.177 Compiler for C supports arguments -mavx2: YES 00:02:37.177 Compiler for C supports arguments -mavx: YES 00:02:37.177 Message: lib/net: Defining dependency "net" 00:02:37.177 Message: lib/meter: Defining dependency "meter" 00:02:37.177 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.177 Message: lib/pci: Defining dependency "pci" 00:02:37.177 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.177 Message: lib/hash: Defining dependency "hash" 00:02:37.177 Message: lib/timer: Defining dependency "timer" 00:02:37.177 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.177 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.177 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.177 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.177 Message: lib/power: Defining dependency "power" 00:02:37.177 Message: lib/reorder: Defining dependency "reorder" 00:02:37.177 Message: lib/security: Defining dependency "security" 00:02:37.177 Has header "linux/userfaultfd.h" : YES 00:02:37.177 Has header "linux/vduse.h" : YES 00:02:37.177 Message: lib/vhost: Defining dependency "vhost" 00:02:37.177 Compiler for C 
supports arguments -Wno-format-truncation: YES (cached) 00:02:37.177 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.177 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.177 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.177 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:37.177 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:37.177 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:37.177 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:37.177 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:37.177 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:37.177 Program doxygen found: YES (/usr/bin/doxygen) 00:02:37.177 Configuring doxy-api-html.conf using configuration 00:02:37.177 Configuring doxy-api-man.conf using configuration 00:02:37.177 Program mandb found: YES (/usr/bin/mandb) 00:02:37.177 Program sphinx-build found: NO 00:02:37.177 Configuring rte_build_config.h using configuration 00:02:37.177 Message: 00:02:37.177 ================= 00:02:37.177 Applications Enabled 00:02:37.177 ================= 00:02:37.177 00:02:37.177 apps: 00:02:37.177 00:02:37.177 00:02:37.177 Message: 00:02:37.177 ================= 00:02:37.177 Libraries Enabled 00:02:37.177 ================= 00:02:37.177 00:02:37.177 libs: 00:02:37.177 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:37.177 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:37.177 cryptodev, dmadev, power, reorder, security, vhost, 00:02:37.177 00:02:37.177 Message: 00:02:37.177 =============== 00:02:37.177 Drivers Enabled 00:02:37.177 =============== 00:02:37.177 00:02:37.177 common: 00:02:37.177 00:02:37.177 bus: 00:02:37.177 pci, vdev, 00:02:37.177 mempool: 00:02:37.177 ring, 00:02:37.177 dma: 00:02:37.177 00:02:37.177 net: 00:02:37.177 00:02:37.177 crypto: 00:02:37.177 00:02:37.177 compress: 00:02:37.177 00:02:37.177 vdpa: 00:02:37.177 00:02:37.177 00:02:37.177 Message: 00:02:37.177 ================= 00:02:37.177 Content Skipped 00:02:37.177 ================= 00:02:37.177 00:02:37.177 apps: 00:02:37.177 dumpcap: explicitly disabled via build config 00:02:37.177 graph: explicitly disabled via build config 00:02:37.177 pdump: explicitly disabled via build config 00:02:37.177 proc-info: explicitly disabled via build config 00:02:37.177 test-acl: explicitly disabled via build config 00:02:37.177 test-bbdev: explicitly disabled via build config 00:02:37.177 test-cmdline: explicitly disabled via build config 00:02:37.177 test-compress-perf: explicitly disabled via build config 00:02:37.177 test-crypto-perf: explicitly disabled via build config 00:02:37.177 test-dma-perf: explicitly disabled via build config 00:02:37.177 test-eventdev: explicitly disabled via build config 00:02:37.177 test-fib: explicitly disabled via build config 00:02:37.177 test-flow-perf: explicitly disabled via build config 00:02:37.177 test-gpudev: explicitly disabled via build config 00:02:37.177 test-mldev: explicitly disabled via build config 00:02:37.177 test-pipeline: explicitly disabled via build config 00:02:37.177 test-pmd: explicitly disabled via build config 00:02:37.177 test-regex: explicitly disabled via build config 00:02:37.177 test-sad: explicitly disabled via build config 00:02:37.177 test-security-perf: explicitly disabled via build config 00:02:37.177 00:02:37.177 libs: 00:02:37.177 argparse: 
explicitly disabled via build config 00:02:37.177 metrics: explicitly disabled via build config 00:02:37.177 acl: explicitly disabled via build config 00:02:37.177 bbdev: explicitly disabled via build config 00:02:37.177 bitratestats: explicitly disabled via build config 00:02:37.177 bpf: explicitly disabled via build config 00:02:37.177 cfgfile: explicitly disabled via build config 00:02:37.177 distributor: explicitly disabled via build config 00:02:37.177 efd: explicitly disabled via build config 00:02:37.177 eventdev: explicitly disabled via build config 00:02:37.177 dispatcher: explicitly disabled via build config 00:02:37.177 gpudev: explicitly disabled via build config 00:02:37.177 gro: explicitly disabled via build config 00:02:37.177 gso: explicitly disabled via build config 00:02:37.177 ip_frag: explicitly disabled via build config 00:02:37.177 jobstats: explicitly disabled via build config 00:02:37.177 latencystats: explicitly disabled via build config 00:02:37.177 lpm: explicitly disabled via build config 00:02:37.177 member: explicitly disabled via build config 00:02:37.177 pcapng: explicitly disabled via build config 00:02:37.177 rawdev: explicitly disabled via build config 00:02:37.177 regexdev: explicitly disabled via build config 00:02:37.177 mldev: explicitly disabled via build config 00:02:37.177 rib: explicitly disabled via build config 00:02:37.177 sched: explicitly disabled via build config 00:02:37.177 stack: explicitly disabled via build config 00:02:37.177 ipsec: explicitly disabled via build config 00:02:37.177 pdcp: explicitly disabled via build config 00:02:37.177 fib: explicitly disabled via build config 00:02:37.177 port: explicitly disabled via build config 00:02:37.177 pdump: explicitly disabled via build config 00:02:37.177 table: explicitly disabled via build config 00:02:37.177 pipeline: explicitly disabled via build config 00:02:37.177 graph: explicitly disabled via build config 00:02:37.177 node: explicitly disabled via build config 00:02:37.177 00:02:37.177 drivers: 00:02:37.177 common/cpt: not in enabled drivers build config 00:02:37.177 common/dpaax: not in enabled drivers build config 00:02:37.177 common/iavf: not in enabled drivers build config 00:02:37.177 common/idpf: not in enabled drivers build config 00:02:37.177 common/ionic: not in enabled drivers build config 00:02:37.177 common/mvep: not in enabled drivers build config 00:02:37.177 common/octeontx: not in enabled drivers build config 00:02:37.177 bus/auxiliary: not in enabled drivers build config 00:02:37.177 bus/cdx: not in enabled drivers build config 00:02:37.177 bus/dpaa: not in enabled drivers build config 00:02:37.177 bus/fslmc: not in enabled drivers build config 00:02:37.177 bus/ifpga: not in enabled drivers build config 00:02:37.177 bus/platform: not in enabled drivers build config 00:02:37.177 bus/uacce: not in enabled drivers build config 00:02:37.177 bus/vmbus: not in enabled drivers build config 00:02:37.177 common/cnxk: not in enabled drivers build config 00:02:37.177 common/mlx5: not in enabled drivers build config 00:02:37.177 common/nfp: not in enabled drivers build config 00:02:37.177 common/nitrox: not in enabled drivers build config 00:02:37.177 common/qat: not in enabled drivers build config 00:02:37.177 common/sfc_efx: not in enabled drivers build config 00:02:37.177 mempool/bucket: not in enabled drivers build config 00:02:37.177 mempool/cnxk: not in enabled drivers build config 00:02:37.177 mempool/dpaa: not in enabled drivers build config 00:02:37.177 mempool/dpaa2: 
not in enabled drivers build config 00:02:37.177 mempool/octeontx: not in enabled drivers build config 00:02:37.177 mempool/stack: not in enabled drivers build config 00:02:37.177 dma/cnxk: not in enabled drivers build config 00:02:37.177 dma/dpaa: not in enabled drivers build config 00:02:37.177 dma/dpaa2: not in enabled drivers build config 00:02:37.177 dma/hisilicon: not in enabled drivers build config 00:02:37.177 dma/idxd: not in enabled drivers build config 00:02:37.177 dma/ioat: not in enabled drivers build config 00:02:37.177 dma/skeleton: not in enabled drivers build config 00:02:37.177 net/af_packet: not in enabled drivers build config 00:02:37.177 net/af_xdp: not in enabled drivers build config 00:02:37.177 net/ark: not in enabled drivers build config 00:02:37.177 net/atlantic: not in enabled drivers build config 00:02:37.177 net/avp: not in enabled drivers build config 00:02:37.177 net/axgbe: not in enabled drivers build config 00:02:37.177 net/bnx2x: not in enabled drivers build config 00:02:37.177 net/bnxt: not in enabled drivers build config 00:02:37.177 net/bonding: not in enabled drivers build config 00:02:37.177 net/cnxk: not in enabled drivers build config 00:02:37.177 net/cpfl: not in enabled drivers build config 00:02:37.177 net/cxgbe: not in enabled drivers build config 00:02:37.177 net/dpaa: not in enabled drivers build config 00:02:37.177 net/dpaa2: not in enabled drivers build config 00:02:37.177 net/e1000: not in enabled drivers build config 00:02:37.177 net/ena: not in enabled drivers build config 00:02:37.177 net/enetc: not in enabled drivers build config 00:02:37.177 net/enetfec: not in enabled drivers build config 00:02:37.177 net/enic: not in enabled drivers build config 00:02:37.177 net/failsafe: not in enabled drivers build config 00:02:37.177 net/fm10k: not in enabled drivers build config 00:02:37.177 net/gve: not in enabled drivers build config 00:02:37.177 net/hinic: not in enabled drivers build config 00:02:37.177 net/hns3: not in enabled drivers build config 00:02:37.177 net/i40e: not in enabled drivers build config 00:02:37.177 net/iavf: not in enabled drivers build config 00:02:37.177 net/ice: not in enabled drivers build config 00:02:37.177 net/idpf: not in enabled drivers build config 00:02:37.177 net/igc: not in enabled drivers build config 00:02:37.177 net/ionic: not in enabled drivers build config 00:02:37.177 net/ipn3ke: not in enabled drivers build config 00:02:37.177 net/ixgbe: not in enabled drivers build config 00:02:37.177 net/mana: not in enabled drivers build config 00:02:37.177 net/memif: not in enabled drivers build config 00:02:37.177 net/mlx4: not in enabled drivers build config 00:02:37.177 net/mlx5: not in enabled drivers build config 00:02:37.177 net/mvneta: not in enabled drivers build config 00:02:37.178 net/mvpp2: not in enabled drivers build config 00:02:37.178 net/netvsc: not in enabled drivers build config 00:02:37.178 net/nfb: not in enabled drivers build config 00:02:37.178 net/nfp: not in enabled drivers build config 00:02:37.178 net/ngbe: not in enabled drivers build config 00:02:37.178 net/null: not in enabled drivers build config 00:02:37.178 net/octeontx: not in enabled drivers build config 00:02:37.178 net/octeon_ep: not in enabled drivers build config 00:02:37.178 net/pcap: not in enabled drivers build config 00:02:37.178 net/pfe: not in enabled drivers build config 00:02:37.178 net/qede: not in enabled drivers build config 00:02:37.178 net/ring: not in enabled drivers build config 00:02:37.178 net/sfc: not in 
enabled drivers build config 00:02:37.178 net/softnic: not in enabled drivers build config 00:02:37.178 net/tap: not in enabled drivers build config 00:02:37.178 net/thunderx: not in enabled drivers build config 00:02:37.178 net/txgbe: not in enabled drivers build config 00:02:37.178 net/vdev_netvsc: not in enabled drivers build config 00:02:37.178 net/vhost: not in enabled drivers build config 00:02:37.178 net/virtio: not in enabled drivers build config 00:02:37.178 net/vmxnet3: not in enabled drivers build config 00:02:37.178 raw/*: missing internal dependency, "rawdev" 00:02:37.178 crypto/armv8: not in enabled drivers build config 00:02:37.178 crypto/bcmfs: not in enabled drivers build config 00:02:37.178 crypto/caam_jr: not in enabled drivers build config 00:02:37.178 crypto/ccp: not in enabled drivers build config 00:02:37.178 crypto/cnxk: not in enabled drivers build config 00:02:37.178 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.178 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.178 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.178 crypto/mlx5: not in enabled drivers build config 00:02:37.178 crypto/mvsam: not in enabled drivers build config 00:02:37.178 crypto/nitrox: not in enabled drivers build config 00:02:37.178 crypto/null: not in enabled drivers build config 00:02:37.178 crypto/octeontx: not in enabled drivers build config 00:02:37.178 crypto/openssl: not in enabled drivers build config 00:02:37.178 crypto/scheduler: not in enabled drivers build config 00:02:37.178 crypto/uadk: not in enabled drivers build config 00:02:37.178 crypto/virtio: not in enabled drivers build config 00:02:37.178 compress/isal: not in enabled drivers build config 00:02:37.178 compress/mlx5: not in enabled drivers build config 00:02:37.178 compress/nitrox: not in enabled drivers build config 00:02:37.178 compress/octeontx: not in enabled drivers build config 00:02:37.178 compress/zlib: not in enabled drivers build config 00:02:37.178 regex/*: missing internal dependency, "regexdev" 00:02:37.178 ml/*: missing internal dependency, "mldev" 00:02:37.178 vdpa/ifc: not in enabled drivers build config 00:02:37.178 vdpa/mlx5: not in enabled drivers build config 00:02:37.178 vdpa/nfp: not in enabled drivers build config 00:02:37.178 vdpa/sfc: not in enabled drivers build config 00:02:37.178 event/*: missing internal dependency, "eventdev" 00:02:37.178 baseband/*: missing internal dependency, "bbdev" 00:02:37.178 gpu/*: missing internal dependency, "gpudev" 00:02:37.178 00:02:37.178 00:02:37.178 Build targets in project: 85 00:02:37.178 00:02:37.178 DPDK 24.03.0 00:02:37.178 00:02:37.178 User defined options 00:02:37.178 buildtype : debug 00:02:37.178 default_library : shared 00:02:37.178 libdir : lib 00:02:37.178 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:37.178 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:37.178 c_link_args : 00:02:37.178 cpu_instruction_set: native 00:02:37.178 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:37.178 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:37.178 enable_docs : false 00:02:37.178 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:37.178 enable_kmods : false 00:02:37.178 max_lcores : 128 00:02:37.178 tests : false 00:02:37.178 00:02:37.178 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.178 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:37.178 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.178 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.178 [3/268] Linking static target lib/librte_kvargs.a 00:02:37.178 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.178 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.178 [6/268] Linking static target lib/librte_log.a 00:02:37.178 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.435 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.435 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.435 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.435 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.435 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.435 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.435 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.435 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.435 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.692 [17/268] Linking static target lib/librte_telemetry.a 00:02:37.692 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.951 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:37.951 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.951 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.951 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.951 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.951 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.951 [25/268] Linking target lib/librte_log.so.24.1 00:02:37.951 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.951 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.208 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.208 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.208 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.208 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.208 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:38.464 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.464 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.464 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.464 [36/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.464 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:38.464 [38/268] Linking target lib/librte_telemetry.so.24.1 00:02:38.464 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.464 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.724 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.724 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.724 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.724 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.724 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.724 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.724 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.724 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.999 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.999 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.999 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.999 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.258 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.258 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.258 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.258 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.258 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.258 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.258 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.516 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.516 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.516 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.775 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.775 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.775 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.775 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.031 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.031 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.031 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.031 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.031 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.031 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.288 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.288 
[74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.288 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.288 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.288 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.288 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.546 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.546 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.546 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.546 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.546 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.546 [84/268] Linking static target lib/librte_ring.a 00:02:40.804 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.804 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.804 [87/268] Linking static target lib/librte_eal.a 00:02:41.062 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.062 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.062 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.062 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.062 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.062 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.062 [94/268] Linking static target lib/librte_rcu.a 00:02:41.062 [95/268] Linking static target lib/librte_mempool.a 00:02:41.062 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.320 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.320 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.320 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.579 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.579 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.579 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.579 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.579 [104/268] Linking static target lib/librte_mbuf.a 00:02:41.579 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.579 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.837 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.837 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.837 [109/268] Linking static target lib/librte_net.a 00:02:41.837 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.095 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.095 [112/268] Linking static target lib/librte_meter.a 00:02:42.095 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.095 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.095 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.353 [116/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.353 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.353 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.353 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.612 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.612 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.612 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.870 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.870 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.870 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.871 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.871 [127/268] Linking static target lib/librte_pci.a 00:02:43.129 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.129 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.129 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.129 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.129 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.129 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.129 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.129 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.387 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.387 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:43.387 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.387 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.387 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.387 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.387 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.387 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.387 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.387 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:43.387 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.646 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.646 [148/268] Linking static target lib/librte_cmdline.a 00:02:43.646 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.646 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.646 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.646 [152/268] Linking static target lib/librte_ethdev.a 00:02:43.646 [153/268] Linking static target lib/librte_timer.a 00:02:43.904 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:43.904 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.904 [156/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.904 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.162 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.162 [159/268] Linking static target lib/librte_hash.a 00:02:44.162 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.162 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.162 [162/268] Linking static target lib/librte_compressdev.a 00:02:44.162 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.420 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.420 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.420 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.420 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.420 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.420 [169/268] Linking static target lib/librte_dmadev.a 00:02:44.677 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.677 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.677 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:45.004 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.004 [174/268] Linking static target lib/librte_cryptodev.a 00:02:45.004 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.004 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.004 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.293 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.293 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:45.293 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.293 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:45.293 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.293 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:45.293 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.550 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.550 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.550 [187/268] Linking static target lib/librte_power.a 00:02:45.550 [188/268] Linking static target lib/librte_reorder.a 00:02:45.550 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.550 [190/268] Linking static target lib/librte_security.a 00:02:45.807 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.807 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.807 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:46.065 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.065 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.323 [196/268] 
Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.323 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.323 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.323 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.580 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.580 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.580 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.580 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.838 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.838 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:46.838 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.838 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.838 [208/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.838 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:46.838 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:46.838 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:46.838 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.095 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.095 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.095 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.095 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.095 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:47.095 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.095 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.095 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:47.095 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.095 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.354 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.354 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.354 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.354 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.354 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:47.614 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.180 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.180 [230/268] Linking static target lib/librte_vhost.a 00:02:50.711 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.237 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.237 
[233/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.237 [234/268] Linking target lib/librte_eal.so.24.1 00:02:53.237 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.237 [236/268] Linking target lib/librte_ring.so.24.1 00:02:53.237 [237/268] Linking target lib/librte_timer.so.24.1 00:02:53.237 [238/268] Linking target lib/librte_meter.so.24.1 00:02:53.237 [239/268] Linking target lib/librte_pci.so.24.1 00:02:53.237 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:53.237 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.237 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.237 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.237 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.237 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.237 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.237 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:53.237 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:53.237 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.494 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.494 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.494 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:53.494 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.494 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:53.753 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:53.753 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:53.753 [257/268] Linking target lib/librte_net.so.24.1 00:02:53.753 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:53.753 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.753 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.011 [261/268] Linking target lib/librte_security.so.24.1 00:02:54.011 [262/268] Linking target lib/librte_hash.so.24.1 00:02:54.011 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:54.011 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:54.011 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.011 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:54.011 [267/268] Linking target lib/librte_power.so.24.1 00:02:54.011 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:54.011 INFO: autodetecting backend as ninja 00:02:54.011 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:55.386 CC lib/ut/ut.o 00:02:55.386 CC lib/log/log.o 00:02:55.386 CC lib/log/log_flags.o 00:02:55.386 CC lib/log/log_deprecated.o 00:02:55.386 CC lib/ut_mock/mock.o 00:02:55.386 LIB libspdk_ut_mock.a 00:02:55.386 LIB libspdk_log.a 00:02:55.386 LIB libspdk_ut.a 00:02:55.645 SO libspdk_ut_mock.so.6.0 00:02:55.645 SO libspdk_log.so.7.0 00:02:55.645 SO libspdk_ut.so.2.0 00:02:55.645 SYMLINK libspdk_ut_mock.so 00:02:55.645 SYMLINK libspdk_log.so 00:02:55.645 SYMLINK libspdk_ut.so 00:02:55.904 CC lib/dma/dma.o 00:02:55.904 CXX 
lib/trace_parser/trace.o 00:02:55.904 CC lib/ioat/ioat.o 00:02:55.904 CC lib/util/base64.o 00:02:55.904 CC lib/util/bit_array.o 00:02:55.904 CC lib/util/crc16.o 00:02:55.904 CC lib/util/cpuset.o 00:02:55.904 CC lib/util/crc32.o 00:02:55.904 CC lib/util/crc32c.o 00:02:55.904 CC lib/vfio_user/host/vfio_user_pci.o 00:02:56.163 CC lib/vfio_user/host/vfio_user.o 00:02:56.163 CC lib/util/crc32_ieee.o 00:02:56.163 CC lib/util/crc64.o 00:02:56.163 CC lib/util/dif.o 00:02:56.163 LIB libspdk_dma.a 00:02:56.163 SO libspdk_dma.so.4.0 00:02:56.163 CC lib/util/fd.o 00:02:56.163 CC lib/util/fd_group.o 00:02:56.163 SYMLINK libspdk_dma.so 00:02:56.163 LIB libspdk_ioat.a 00:02:56.163 CC lib/util/file.o 00:02:56.163 CC lib/util/hexlify.o 00:02:56.163 CC lib/util/iov.o 00:02:56.163 SO libspdk_ioat.so.7.0 00:02:56.163 LIB libspdk_vfio_user.a 00:02:56.163 CC lib/util/math.o 00:02:56.163 CC lib/util/net.o 00:02:56.163 SYMLINK libspdk_ioat.so 00:02:56.163 CC lib/util/pipe.o 00:02:56.163 SO libspdk_vfio_user.so.5.0 00:02:56.163 CC lib/util/strerror_tls.o 00:02:56.163 CC lib/util/string.o 00:02:56.163 SYMLINK libspdk_vfio_user.so 00:02:56.427 CC lib/util/uuid.o 00:02:56.427 CC lib/util/xor.o 00:02:56.427 CC lib/util/zipf.o 00:02:56.427 LIB libspdk_util.a 00:02:56.686 SO libspdk_util.so.9.1 00:02:56.686 LIB libspdk_trace_parser.a 00:02:56.686 SO libspdk_trace_parser.so.5.0 00:02:56.686 SYMLINK libspdk_util.so 00:02:56.946 SYMLINK libspdk_trace_parser.so 00:02:56.946 CC lib/vmd/vmd.o 00:02:56.946 CC lib/vmd/led.o 00:02:56.946 CC lib/rdma_utils/rdma_utils.o 00:02:56.946 CC lib/env_dpdk/memory.o 00:02:56.946 CC lib/env_dpdk/env.o 00:02:56.946 CC lib/env_dpdk/pci.o 00:02:56.946 CC lib/conf/conf.o 00:02:56.946 CC lib/json/json_parse.o 00:02:56.946 CC lib/rdma_provider/common.o 00:02:56.946 CC lib/idxd/idxd.o 00:02:57.205 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:57.205 CC lib/env_dpdk/init.o 00:02:57.205 CC lib/json/json_util.o 00:02:57.205 LIB libspdk_conf.a 00:02:57.205 SO libspdk_conf.so.6.0 00:02:57.205 LIB libspdk_rdma_utils.a 00:02:57.205 LIB libspdk_rdma_provider.a 00:02:57.205 SO libspdk_rdma_utils.so.1.0 00:02:57.205 SYMLINK libspdk_conf.so 00:02:57.205 CC lib/env_dpdk/threads.o 00:02:57.205 CC lib/env_dpdk/pci_ioat.o 00:02:57.205 SO libspdk_rdma_provider.so.6.0 00:02:57.205 SYMLINK libspdk_rdma_utils.so 00:02:57.464 CC lib/env_dpdk/pci_virtio.o 00:02:57.464 SYMLINK libspdk_rdma_provider.so 00:02:57.464 CC lib/idxd/idxd_user.o 00:02:57.464 CC lib/json/json_write.o 00:02:57.464 CC lib/env_dpdk/pci_vmd.o 00:02:57.465 CC lib/env_dpdk/pci_idxd.o 00:02:57.465 CC lib/env_dpdk/pci_event.o 00:02:57.465 CC lib/env_dpdk/sigbus_handler.o 00:02:57.465 CC lib/env_dpdk/pci_dpdk.o 00:02:57.465 LIB libspdk_vmd.a 00:02:57.465 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.465 SO libspdk_vmd.so.6.0 00:02:57.465 CC lib/idxd/idxd_kernel.o 00:02:57.465 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.723 SYMLINK libspdk_vmd.so 00:02:57.723 LIB libspdk_json.a 00:02:57.723 SO libspdk_json.so.6.0 00:02:57.723 LIB libspdk_idxd.a 00:02:57.723 SYMLINK libspdk_json.so 00:02:57.723 SO libspdk_idxd.so.12.0 00:02:57.982 SYMLINK libspdk_idxd.so 00:02:58.240 CC lib/jsonrpc/jsonrpc_server.o 00:02:58.240 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:58.240 CC lib/jsonrpc/jsonrpc_client.o 00:02:58.240 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:58.240 LIB libspdk_env_dpdk.a 00:02:58.240 SO libspdk_env_dpdk.so.14.1 00:02:58.499 LIB libspdk_jsonrpc.a 00:02:58.499 SO libspdk_jsonrpc.so.6.0 00:02:58.499 SYMLINK libspdk_env_dpdk.so 00:02:58.499 SYMLINK 
libspdk_jsonrpc.so 00:02:59.093 CC lib/rpc/rpc.o 00:02:59.093 LIB libspdk_rpc.a 00:02:59.093 SO libspdk_rpc.so.6.0 00:02:59.351 SYMLINK libspdk_rpc.so 00:02:59.609 CC lib/trace/trace.o 00:02:59.609 CC lib/trace/trace_flags.o 00:02:59.609 CC lib/keyring/keyring.o 00:02:59.609 CC lib/trace/trace_rpc.o 00:02:59.609 CC lib/keyring/keyring_rpc.o 00:02:59.609 CC lib/notify/notify.o 00:02:59.609 CC lib/notify/notify_rpc.o 00:02:59.868 LIB libspdk_notify.a 00:02:59.868 LIB libspdk_keyring.a 00:02:59.868 LIB libspdk_trace.a 00:02:59.868 SO libspdk_notify.so.6.0 00:02:59.868 SO libspdk_keyring.so.1.0 00:02:59.868 SO libspdk_trace.so.10.0 00:02:59.868 SYMLINK libspdk_notify.so 00:02:59.868 SYMLINK libspdk_keyring.so 00:02:59.868 SYMLINK libspdk_trace.so 00:03:00.434 CC lib/thread/thread.o 00:03:00.434 CC lib/thread/iobuf.o 00:03:00.434 CC lib/sock/sock.o 00:03:00.434 CC lib/sock/sock_rpc.o 00:03:00.692 LIB libspdk_sock.a 00:03:00.692 SO libspdk_sock.so.10.0 00:03:00.692 SYMLINK libspdk_sock.so 00:03:01.260 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.260 CC lib/nvme/nvme_ctrlr.o 00:03:01.260 CC lib/nvme/nvme_fabric.o 00:03:01.260 CC lib/nvme/nvme_ns_cmd.o 00:03:01.260 CC lib/nvme/nvme_ns.o 00:03:01.260 CC lib/nvme/nvme_pcie_common.o 00:03:01.260 CC lib/nvme/nvme_pcie.o 00:03:01.260 CC lib/nvme/nvme_qpair.o 00:03:01.260 CC lib/nvme/nvme.o 00:03:01.519 LIB libspdk_thread.a 00:03:01.519 SO libspdk_thread.so.10.1 00:03:01.778 SYMLINK libspdk_thread.so 00:03:01.778 CC lib/nvme/nvme_quirks.o 00:03:01.778 CC lib/nvme/nvme_transport.o 00:03:01.778 CC lib/nvme/nvme_discovery.o 00:03:01.778 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.036 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.036 CC lib/nvme/nvme_tcp.o 00:03:02.036 CC lib/nvme/nvme_opal.o 00:03:02.036 CC lib/nvme/nvme_io_msg.o 00:03:02.295 CC lib/nvme/nvme_poll_group.o 00:03:02.295 CC lib/nvme/nvme_zns.o 00:03:02.553 CC lib/nvme/nvme_stubs.o 00:03:02.553 CC lib/accel/accel.o 00:03:02.553 CC lib/blob/blobstore.o 00:03:02.553 CC lib/blob/request.o 00:03:02.553 CC lib/init/json_config.o 00:03:02.553 CC lib/virtio/virtio.o 00:03:02.812 CC lib/virtio/virtio_vhost_user.o 00:03:02.812 CC lib/virtio/virtio_vfio_user.o 00:03:02.812 CC lib/init/subsystem.o 00:03:02.812 CC lib/init/subsystem_rpc.o 00:03:02.812 CC lib/virtio/virtio_pci.o 00:03:02.812 CC lib/accel/accel_rpc.o 00:03:03.070 CC lib/accel/accel_sw.o 00:03:03.070 CC lib/init/rpc.o 00:03:03.070 CC lib/nvme/nvme_auth.o 00:03:03.070 CC lib/nvme/nvme_cuse.o 00:03:03.070 CC lib/nvme/nvme_rdma.o 00:03:03.070 CC lib/blob/zeroes.o 00:03:03.070 LIB libspdk_init.a 00:03:03.070 LIB libspdk_virtio.a 00:03:03.070 SO libspdk_init.so.5.0 00:03:03.070 SO libspdk_virtio.so.7.0 00:03:03.070 CC lib/blob/blob_bs_dev.o 00:03:03.328 SYMLINK libspdk_init.so 00:03:03.328 SYMLINK libspdk_virtio.so 00:03:03.328 LIB libspdk_accel.a 00:03:03.328 SO libspdk_accel.so.15.1 00:03:03.587 SYMLINK libspdk_accel.so 00:03:03.587 CC lib/event/app.o 00:03:03.587 CC lib/event/reactor.o 00:03:03.587 CC lib/event/scheduler_static.o 00:03:03.587 CC lib/event/log_rpc.o 00:03:03.587 CC lib/event/app_rpc.o 00:03:03.846 CC lib/bdev/bdev.o 00:03:03.846 CC lib/bdev/bdev_rpc.o 00:03:03.846 CC lib/bdev/part.o 00:03:03.846 CC lib/bdev/bdev_zone.o 00:03:03.846 CC lib/bdev/scsi_nvme.o 00:03:03.846 LIB libspdk_event.a 00:03:04.104 SO libspdk_event.so.14.0 00:03:04.104 SYMLINK libspdk_event.so 00:03:04.104 LIB libspdk_nvme.a 00:03:04.362 SO libspdk_nvme.so.13.1 00:03:04.621 SYMLINK libspdk_nvme.so 00:03:04.880 LIB libspdk_blob.a 00:03:05.139 SO libspdk_blob.so.11.0 
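The [1/268] through [268/268] targets earlier in this excerpt are the bundled DPDK subproject being compiled under /home/vagrant/spdk_repo/spdk/dpdk/build-tmp with the meson options shown in the configuration summary at the top (enable_docs=false, enable_drivers=bus,bus/pci,bus/vdev,mempool/ring, enable_kmods=false, max_lcores=128, tests=false). As a minimal sketch only, a standalone reconfiguration with those same options would look roughly like the following; the exact command line that SPDK's configure script generates for this job may differ:

    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        -Denable_docs=false -Dtests=false -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C build-tmp -j 10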
00:03:05.139 SYMLINK libspdk_blob.so 00:03:05.707 CC lib/blobfs/blobfs.o 00:03:05.707 CC lib/blobfs/tree.o 00:03:05.707 CC lib/lvol/lvol.o 00:03:05.966 LIB libspdk_bdev.a 00:03:05.966 SO libspdk_bdev.so.15.1 00:03:06.224 SYMLINK libspdk_bdev.so 00:03:06.224 LIB libspdk_blobfs.a 00:03:06.224 SO libspdk_blobfs.so.10.0 00:03:06.224 CC lib/scsi/dev.o 00:03:06.224 CC lib/scsi/lun.o 00:03:06.224 CC lib/ublk/ublk.o 00:03:06.224 CC lib/scsi/port.o 00:03:06.224 CC lib/nvmf/ctrlr.o 00:03:06.224 CC lib/ublk/ublk_rpc.o 00:03:06.224 LIB libspdk_lvol.a 00:03:06.483 CC lib/nbd/nbd.o 00:03:06.483 CC lib/ftl/ftl_core.o 00:03:06.483 SO libspdk_lvol.so.10.0 00:03:06.483 SYMLINK libspdk_blobfs.so 00:03:06.483 CC lib/ftl/ftl_init.o 00:03:06.483 SYMLINK libspdk_lvol.so 00:03:06.483 CC lib/ftl/ftl_layout.o 00:03:06.483 CC lib/ftl/ftl_debug.o 00:03:06.483 CC lib/ftl/ftl_io.o 00:03:06.483 CC lib/scsi/scsi.o 00:03:06.483 CC lib/nvmf/ctrlr_discovery.o 00:03:06.483 CC lib/ftl/ftl_sb.o 00:03:06.743 CC lib/scsi/scsi_bdev.o 00:03:06.743 CC lib/nvmf/ctrlr_bdev.o 00:03:06.743 CC lib/nbd/nbd_rpc.o 00:03:06.743 CC lib/scsi/scsi_pr.o 00:03:06.743 CC lib/scsi/scsi_rpc.o 00:03:06.743 CC lib/nvmf/subsystem.o 00:03:06.743 CC lib/ftl/ftl_l2p.o 00:03:06.743 LIB libspdk_ublk.a 00:03:07.002 LIB libspdk_nbd.a 00:03:07.002 CC lib/scsi/task.o 00:03:07.002 SO libspdk_ublk.so.3.0 00:03:07.002 SO libspdk_nbd.so.7.0 00:03:07.002 CC lib/ftl/ftl_l2p_flat.o 00:03:07.002 SYMLINK libspdk_nbd.so 00:03:07.002 SYMLINK libspdk_ublk.so 00:03:07.002 CC lib/ftl/ftl_nv_cache.o 00:03:07.002 CC lib/nvmf/nvmf.o 00:03:07.002 CC lib/nvmf/nvmf_rpc.o 00:03:07.002 CC lib/nvmf/transport.o 00:03:07.002 CC lib/nvmf/tcp.o 00:03:07.002 LIB libspdk_scsi.a 00:03:07.260 CC lib/nvmf/stubs.o 00:03:07.260 SO libspdk_scsi.so.9.0 00:03:07.260 CC lib/ftl/ftl_band.o 00:03:07.260 SYMLINK libspdk_scsi.so 00:03:07.260 CC lib/ftl/ftl_band_ops.o 00:03:07.518 CC lib/ftl/ftl_writer.o 00:03:07.518 CC lib/nvmf/mdns_server.o 00:03:07.518 CC lib/ftl/ftl_rq.o 00:03:07.518 CC lib/ftl/ftl_reloc.o 00:03:07.780 CC lib/ftl/ftl_l2p_cache.o 00:03:07.780 CC lib/nvmf/rdma.o 00:03:07.780 CC lib/ftl/ftl_p2l.o 00:03:07.780 CC lib/nvmf/auth.o 00:03:07.780 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.780 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.780 CC lib/iscsi/conn.o 00:03:08.037 CC lib/vhost/vhost.o 00:03:08.037 CC lib/vhost/vhost_rpc.o 00:03:08.037 CC lib/vhost/vhost_scsi.o 00:03:08.037 CC lib/iscsi/init_grp.o 00:03:08.037 CC lib/iscsi/iscsi.o 00:03:08.037 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.297 CC lib/vhost/vhost_blk.o 00:03:08.297 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.297 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.297 CC lib/iscsi/md5.o 00:03:08.297 CC lib/iscsi/param.o 00:03:08.555 CC lib/iscsi/portal_grp.o 00:03:08.555 CC lib/iscsi/tgt_node.o 00:03:08.555 CC lib/iscsi/iscsi_subsystem.o 00:03:08.555 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.555 CC lib/vhost/rte_vhost_user.o 00:03:08.812 CC lib/iscsi/iscsi_rpc.o 00:03:08.812 CC lib/iscsi/task.o 00:03:08.812 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.812 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.812 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.812 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.812 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.070 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.070 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.070 CC lib/ftl/utils/ftl_conf.o 00:03:09.070 CC lib/ftl/utils/ftl_md.o 00:03:09.070 CC lib/ftl/utils/ftl_mempool.o 00:03:09.070 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.070 CC lib/ftl/utils/ftl_property.o 00:03:09.070 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.328 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.328 LIB libspdk_iscsi.a 00:03:09.328 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.328 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.328 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.328 SO libspdk_iscsi.so.8.0 00:03:09.328 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.328 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.328 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.328 LIB libspdk_nvmf.a 00:03:09.585 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.585 LIB libspdk_vhost.a 00:03:09.585 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.585 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.585 CC lib/ftl/base/ftl_base_dev.o 00:03:09.585 SYMLINK libspdk_iscsi.so 00:03:09.585 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.585 SO libspdk_nvmf.so.19.0 00:03:09.585 SO libspdk_vhost.so.8.0 00:03:09.585 CC lib/ftl/ftl_trace.o 00:03:09.585 SYMLINK libspdk_vhost.so 00:03:09.843 SYMLINK libspdk_nvmf.so 00:03:09.843 LIB libspdk_ftl.a 00:03:10.101 SO libspdk_ftl.so.9.0 00:03:10.357 SYMLINK libspdk_ftl.so 00:03:10.922 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.922 CC module/sock/posix/posix.o 00:03:10.922 CC module/accel/error/accel_error.o 00:03:10.922 CC module/sock/uring/uring.o 00:03:10.922 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.922 CC module/accel/ioat/accel_ioat.o 00:03:10.922 CC module/keyring/file/keyring.o 00:03:10.922 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.922 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.922 LIB libspdk_env_dpdk_rpc.a 00:03:10.922 CC module/blob/bdev/blob_bdev.o 00:03:10.922 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.922 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.922 CC module/keyring/file/keyring_rpc.o 00:03:10.922 LIB libspdk_scheduler_gscheduler.a 00:03:10.922 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.922 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.185 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:11.185 LIB libspdk_scheduler_dynamic.a 00:03:11.185 CC module/accel/error/accel_error_rpc.o 00:03:11.185 SO libspdk_scheduler_dynamic.so.4.0 00:03:11.185 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.185 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:11.185 CC module/accel/ioat/accel_ioat_rpc.o 00:03:11.185 SYMLINK libspdk_scheduler_dynamic.so 00:03:11.185 LIB libspdk_keyring_file.a 00:03:11.185 CC module/accel/dsa/accel_dsa_rpc.o 00:03:11.185 SO libspdk_keyring_file.so.1.0 00:03:11.185 LIB libspdk_blob_bdev.a 00:03:11.185 CC module/accel/dsa/accel_dsa.o 00:03:11.185 LIB libspdk_accel_error.a 00:03:11.185 SO libspdk_blob_bdev.so.11.0 00:03:11.185 SO libspdk_accel_error.so.2.0 00:03:11.185 SYMLINK libspdk_keyring_file.so 00:03:11.185 LIB libspdk_accel_ioat.a 00:03:11.185 SYMLINK libspdk_blob_bdev.so 00:03:11.185 SO libspdk_accel_ioat.so.6.0 00:03:11.185 SYMLINK libspdk_accel_error.so 00:03:11.185 CC module/keyring/linux/keyring.o 00:03:11.442 CC module/keyring/linux/keyring_rpc.o 00:03:11.442 CC module/accel/iaa/accel_iaa.o 00:03:11.442 CC module/accel/iaa/accel_iaa_rpc.o 00:03:11.442 SYMLINK libspdk_accel_ioat.so 00:03:11.442 LIB libspdk_keyring_linux.a 00:03:11.442 LIB libspdk_accel_dsa.a 00:03:11.442 SO libspdk_keyring_linux.so.1.0 00:03:11.442 SO libspdk_accel_dsa.so.5.0 00:03:11.442 LIB libspdk_sock_posix.a 00:03:11.442 LIB libspdk_sock_uring.a 00:03:11.442 CC module/bdev/error/vbdev_error.o 00:03:11.442 LIB libspdk_accel_iaa.a 00:03:11.442 CC module/bdev/delay/vbdev_delay.o 00:03:11.442 SO libspdk_sock_posix.so.6.0 00:03:11.442 SYMLINK 
libspdk_keyring_linux.so 00:03:11.442 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.442 SO libspdk_sock_uring.so.5.0 00:03:11.442 SYMLINK libspdk_accel_dsa.so 00:03:11.442 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.442 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.442 SO libspdk_accel_iaa.so.3.0 00:03:11.700 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.700 CC module/bdev/gpt/gpt.o 00:03:11.700 SYMLINK libspdk_sock_uring.so 00:03:11.700 SYMLINK libspdk_sock_posix.so 00:03:11.700 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.700 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.700 SYMLINK libspdk_accel_iaa.so 00:03:11.700 LIB libspdk_blobfs_bdev.a 00:03:11.700 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.700 LIB libspdk_bdev_error.a 00:03:11.700 CC module/bdev/malloc/bdev_malloc.o 00:03:11.958 SO libspdk_blobfs_bdev.so.6.0 00:03:11.958 SO libspdk_bdev_error.so.6.0 00:03:11.958 LIB libspdk_bdev_delay.a 00:03:11.958 CC module/bdev/null/bdev_null.o 00:03:11.958 SO libspdk_bdev_delay.so.6.0 00:03:11.958 CC module/bdev/nvme/bdev_nvme.o 00:03:11.958 SYMLINK libspdk_blobfs_bdev.so 00:03:11.958 SYMLINK libspdk_bdev_error.so 00:03:11.958 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.958 SYMLINK libspdk_bdev_delay.so 00:03:11.958 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.958 LIB libspdk_bdev_lvol.a 00:03:11.958 LIB libspdk_bdev_gpt.a 00:03:11.958 SO libspdk_bdev_lvol.so.6.0 00:03:11.958 SO libspdk_bdev_gpt.so.6.0 00:03:12.227 CC module/bdev/null/bdev_null_rpc.o 00:03:12.227 CC module/bdev/raid/bdev_raid.o 00:03:12.227 SYMLINK libspdk_bdev_lvol.so 00:03:12.227 SYMLINK libspdk_bdev_gpt.so 00:03:12.227 CC module/bdev/split/vbdev_split.o 00:03:12.227 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.227 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.227 LIB libspdk_bdev_malloc.a 00:03:12.227 SO libspdk_bdev_malloc.so.6.0 00:03:12.227 CC module/bdev/uring/bdev_uring.o 00:03:12.227 SYMLINK libspdk_bdev_malloc.so 00:03:12.227 CC module/bdev/aio/bdev_aio.o 00:03:12.227 LIB libspdk_bdev_null.a 00:03:12.227 LIB libspdk_bdev_passthru.a 00:03:12.227 SO libspdk_bdev_null.so.6.0 00:03:12.227 SO libspdk_bdev_passthru.so.6.0 00:03:12.227 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.494 SYMLINK libspdk_bdev_null.so 00:03:12.494 SYMLINK libspdk_bdev_passthru.so 00:03:12.494 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.495 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.495 CC module/bdev/ftl/bdev_ftl.o 00:03:12.495 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.495 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.495 LIB libspdk_bdev_split.a 00:03:12.495 LIB libspdk_bdev_zone_block.a 00:03:12.495 SO libspdk_bdev_split.so.6.0 00:03:12.495 SO libspdk_bdev_zone_block.so.6.0 00:03:12.495 LIB libspdk_bdev_aio.a 00:03:12.495 CC module/bdev/uring/bdev_uring_rpc.o 00:03:12.495 SYMLINK libspdk_bdev_split.so 00:03:12.495 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.752 SO libspdk_bdev_aio.so.6.0 00:03:12.752 SYMLINK libspdk_bdev_zone_block.so 00:03:12.752 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.752 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.752 LIB libspdk_bdev_ftl.a 00:03:12.752 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.752 SYMLINK libspdk_bdev_aio.so 00:03:12.752 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.752 SO libspdk_bdev_ftl.so.6.0 00:03:12.752 CC module/bdev/raid/raid0.o 00:03:12.752 LIB libspdk_bdev_uring.a 00:03:12.752 SO libspdk_bdev_uring.so.6.0 00:03:12.752 LIB libspdk_bdev_iscsi.a 00:03:12.752 SYMLINK libspdk_bdev_ftl.so 00:03:12.752 CC module/bdev/nvme/bdev_nvme_rpc.o 
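The module/bdev objects above (malloc, null, nvme, passthru, raid, split, zone_block, uring, aio and others) are the bdev modules the functional tests exercise at runtime through JSON-RPC. A minimal usage sketch, assuming the default in-tree output paths (build/bin/spdk_tgt and scripts/rpc.py) and a throwaway malloc bdev chosen purely as an example:

    # Start the target, then create and list a 64 MiB malloc bdev over JSON-RPC.
    ./build/bin/spdk_tgt &
    sleep 2   # crude wait for the RPC socket; a real script would poll instead
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # 64 MiB, 512-byte blocks
    ./scripts/rpc.py bdev_get_bdevs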
00:03:12.752 CC module/bdev/raid/raid1.o 00:03:12.752 SO libspdk_bdev_iscsi.so.6.0 00:03:12.752 SYMLINK libspdk_bdev_uring.so 00:03:12.752 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.010 SYMLINK libspdk_bdev_iscsi.so 00:03:13.010 CC module/bdev/raid/concat.o 00:03:13.010 CC module/bdev/nvme/nvme_rpc.o 00:03:13.010 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.010 CC module/bdev/nvme/vbdev_opal.o 00:03:13.010 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.010 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.010 LIB libspdk_bdev_raid.a 00:03:13.010 LIB libspdk_bdev_virtio.a 00:03:13.268 SO libspdk_bdev_raid.so.6.0 00:03:13.268 SO libspdk_bdev_virtio.so.6.0 00:03:13.268 SYMLINK libspdk_bdev_virtio.so 00:03:13.268 SYMLINK libspdk_bdev_raid.so 00:03:13.922 LIB libspdk_bdev_nvme.a 00:03:13.922 SO libspdk_bdev_nvme.so.7.0 00:03:13.922 SYMLINK libspdk_bdev_nvme.so 00:03:14.856 CC module/event/subsystems/sock/sock.o 00:03:14.856 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.856 CC module/event/subsystems/keyring/keyring.o 00:03:14.856 CC module/event/subsystems/vmd/vmd.o 00:03:14.856 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.856 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.856 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.856 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.856 LIB libspdk_event_scheduler.a 00:03:14.856 LIB libspdk_event_keyring.a 00:03:14.856 LIB libspdk_event_sock.a 00:03:14.856 LIB libspdk_event_vmd.a 00:03:14.856 LIB libspdk_event_vhost_blk.a 00:03:14.856 SO libspdk_event_scheduler.so.4.0 00:03:14.856 SO libspdk_event_keyring.so.1.0 00:03:14.856 LIB libspdk_event_iobuf.a 00:03:14.856 SO libspdk_event_sock.so.5.0 00:03:14.856 SO libspdk_event_vmd.so.6.0 00:03:14.856 SO libspdk_event_vhost_blk.so.3.0 00:03:14.856 SO libspdk_event_iobuf.so.3.0 00:03:14.856 SYMLINK libspdk_event_keyring.so 00:03:14.856 SYMLINK libspdk_event_scheduler.so 00:03:14.856 SYMLINK libspdk_event_sock.so 00:03:14.856 SYMLINK libspdk_event_vhost_blk.so 00:03:14.856 SYMLINK libspdk_event_vmd.so 00:03:14.856 SYMLINK libspdk_event_iobuf.so 00:03:15.424 CC module/event/subsystems/accel/accel.o 00:03:15.424 LIB libspdk_event_accel.a 00:03:15.424 SO libspdk_event_accel.so.6.0 00:03:15.682 SYMLINK libspdk_event_accel.so 00:03:15.942 CC module/event/subsystems/bdev/bdev.o 00:03:16.201 LIB libspdk_event_bdev.a 00:03:16.201 SO libspdk_event_bdev.so.6.0 00:03:16.201 SYMLINK libspdk_event_bdev.so 00:03:16.766 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.766 CC module/event/subsystems/nbd/nbd.o 00:03:16.766 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.766 CC module/event/subsystems/ublk/ublk.o 00:03:16.766 CC module/event/subsystems/scsi/scsi.o 00:03:16.766 LIB libspdk_event_ublk.a 00:03:16.766 LIB libspdk_event_nbd.a 00:03:16.766 LIB libspdk_event_scsi.a 00:03:16.766 SO libspdk_event_ublk.so.3.0 00:03:16.766 SO libspdk_event_nbd.so.6.0 00:03:16.766 LIB libspdk_event_nvmf.a 00:03:16.766 SO libspdk_event_scsi.so.6.0 00:03:16.766 SYMLINK libspdk_event_ublk.so 00:03:16.766 SO libspdk_event_nvmf.so.6.0 00:03:16.766 SYMLINK libspdk_event_nbd.so 00:03:17.024 SYMLINK libspdk_event_scsi.so 00:03:17.024 SYMLINK libspdk_event_nvmf.so 00:03:17.281 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.281 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.538 LIB libspdk_event_vhost_scsi.a 00:03:17.538 LIB libspdk_event_iscsi.a 00:03:17.538 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.538 SO libspdk_event_iscsi.so.6.0 00:03:17.538 SYMLINK libspdk_event_vhost_scsi.so 
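The libspdk_event_* libraries built above are the event-framework subsystem plugins (accel, bdev, nbd, scsi, nvmf and so on) that SPDK applications load at startup. Once the build completes, the shared objects can be sanity-checked from the shell; a small sketch, assuming the default ./build/lib output directory of an in-tree make:

    ls build/lib/libspdk_event_*.so* build/lib/libspdk_nvmf.so*
    readelf -d build/lib/libspdk_nvmf.so | grep SONAME        # embedded SONAME, if any
    nm -D --defined-only build/lib/libspdk_nvmf.so | head     # a few exported symbols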
00:03:17.538 SYMLINK libspdk_event_iscsi.so 00:03:17.795 SO libspdk.so.6.0 00:03:17.795 SYMLINK libspdk.so 00:03:18.051 CC app/trace_record/trace_record.o 00:03:18.051 CXX app/trace/trace.o 00:03:18.051 TEST_HEADER include/spdk/accel.h 00:03:18.051 TEST_HEADER include/spdk/accel_module.h 00:03:18.051 TEST_HEADER include/spdk/assert.h 00:03:18.051 TEST_HEADER include/spdk/barrier.h 00:03:18.051 TEST_HEADER include/spdk/base64.h 00:03:18.051 TEST_HEADER include/spdk/bdev.h 00:03:18.051 TEST_HEADER include/spdk/bdev_module.h 00:03:18.051 TEST_HEADER include/spdk/bdev_zone.h 00:03:18.051 TEST_HEADER include/spdk/bit_array.h 00:03:18.051 TEST_HEADER include/spdk/bit_pool.h 00:03:18.051 TEST_HEADER include/spdk/blob_bdev.h 00:03:18.051 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:18.051 TEST_HEADER include/spdk/blobfs.h 00:03:18.051 CC app/nvmf_tgt/nvmf_main.o 00:03:18.051 TEST_HEADER include/spdk/blob.h 00:03:18.051 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.051 TEST_HEADER include/spdk/conf.h 00:03:18.051 TEST_HEADER include/spdk/config.h 00:03:18.051 TEST_HEADER include/spdk/cpuset.h 00:03:18.051 TEST_HEADER include/spdk/crc16.h 00:03:18.051 TEST_HEADER include/spdk/crc32.h 00:03:18.051 TEST_HEADER include/spdk/crc64.h 00:03:18.051 TEST_HEADER include/spdk/dif.h 00:03:18.051 TEST_HEADER include/spdk/dma.h 00:03:18.051 TEST_HEADER include/spdk/endian.h 00:03:18.051 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.051 TEST_HEADER include/spdk/env.h 00:03:18.051 CC examples/util/zipf/zipf.o 00:03:18.051 TEST_HEADER include/spdk/event.h 00:03:18.051 TEST_HEADER include/spdk/fd_group.h 00:03:18.051 TEST_HEADER include/spdk/fd.h 00:03:18.051 TEST_HEADER include/spdk/file.h 00:03:18.051 TEST_HEADER include/spdk/ftl.h 00:03:18.051 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.051 CC app/spdk_tgt/spdk_tgt.o 00:03:18.309 TEST_HEADER include/spdk/hexlify.h 00:03:18.309 CC test/thread/poller_perf/poller_perf.o 00:03:18.309 TEST_HEADER include/spdk/histogram_data.h 00:03:18.309 TEST_HEADER include/spdk/idxd.h 00:03:18.309 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.309 TEST_HEADER include/spdk/init.h 00:03:18.309 TEST_HEADER include/spdk/ioat.h 00:03:18.309 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.309 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.309 TEST_HEADER include/spdk/json.h 00:03:18.309 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.309 TEST_HEADER include/spdk/keyring.h 00:03:18.309 TEST_HEADER include/spdk/keyring_module.h 00:03:18.309 TEST_HEADER include/spdk/likely.h 00:03:18.309 TEST_HEADER include/spdk/log.h 00:03:18.309 TEST_HEADER include/spdk/lvol.h 00:03:18.309 TEST_HEADER include/spdk/memory.h 00:03:18.309 TEST_HEADER include/spdk/mmio.h 00:03:18.309 TEST_HEADER include/spdk/nbd.h 00:03:18.309 CC test/dma/test_dma/test_dma.o 00:03:18.309 TEST_HEADER include/spdk/net.h 00:03:18.309 TEST_HEADER include/spdk/notify.h 00:03:18.309 TEST_HEADER include/spdk/nvme.h 00:03:18.309 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.309 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.309 CC test/app/bdev_svc/bdev_svc.o 00:03:18.309 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.309 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.309 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.309 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.309 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.309 TEST_HEADER include/spdk/nvmf.h 00:03:18.309 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.309 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.309 TEST_HEADER include/spdk/opal.h 00:03:18.309 TEST_HEADER 
include/spdk/opal_spec.h 00:03:18.309 TEST_HEADER include/spdk/pci_ids.h 00:03:18.309 TEST_HEADER include/spdk/pipe.h 00:03:18.309 TEST_HEADER include/spdk/queue.h 00:03:18.309 TEST_HEADER include/spdk/reduce.h 00:03:18.309 TEST_HEADER include/spdk/rpc.h 00:03:18.309 TEST_HEADER include/spdk/scheduler.h 00:03:18.309 TEST_HEADER include/spdk/scsi.h 00:03:18.309 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.310 TEST_HEADER include/spdk/sock.h 00:03:18.310 TEST_HEADER include/spdk/stdinc.h 00:03:18.310 TEST_HEADER include/spdk/string.h 00:03:18.310 TEST_HEADER include/spdk/thread.h 00:03:18.310 TEST_HEADER include/spdk/trace.h 00:03:18.310 TEST_HEADER include/spdk/trace_parser.h 00:03:18.310 TEST_HEADER include/spdk/tree.h 00:03:18.310 TEST_HEADER include/spdk/ublk.h 00:03:18.310 TEST_HEADER include/spdk/util.h 00:03:18.310 TEST_HEADER include/spdk/uuid.h 00:03:18.310 TEST_HEADER include/spdk/version.h 00:03:18.310 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.310 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.310 TEST_HEADER include/spdk/vhost.h 00:03:18.310 TEST_HEADER include/spdk/vmd.h 00:03:18.310 TEST_HEADER include/spdk/xor.h 00:03:18.310 TEST_HEADER include/spdk/zipf.h 00:03:18.310 CXX test/cpp_headers/accel.o 00:03:18.310 LINK zipf 00:03:18.310 LINK nvmf_tgt 00:03:18.310 LINK spdk_trace_record 00:03:18.310 LINK iscsi_tgt 00:03:18.310 LINK poller_perf 00:03:18.310 LINK spdk_tgt 00:03:18.310 LINK bdev_svc 00:03:18.568 LINK spdk_trace 00:03:18.568 CXX test/cpp_headers/accel_module.o 00:03:18.568 CXX test/cpp_headers/assert.o 00:03:18.568 CXX test/cpp_headers/barrier.o 00:03:18.568 CXX test/cpp_headers/base64.o 00:03:18.568 CXX test/cpp_headers/bdev.o 00:03:18.568 LINK test_dma 00:03:18.568 CC examples/ioat/perf/perf.o 00:03:18.856 CC examples/ioat/verify/verify.o 00:03:18.856 CXX test/cpp_headers/bdev_module.o 00:03:18.856 CC app/spdk_lspci/spdk_lspci.o 00:03:18.856 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.856 CXX test/cpp_headers/bdev_zone.o 00:03:18.856 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.856 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.856 CC examples/idxd/perf/perf.o 00:03:18.856 LINK ioat_perf 00:03:18.856 LINK spdk_lspci 00:03:18.856 CC examples/thread/thread/thread_ex.o 00:03:18.856 LINK verify 00:03:18.856 LINK interrupt_tgt 00:03:18.856 LINK lsvmd 00:03:18.857 CXX test/cpp_headers/bit_array.o 00:03:19.115 CC examples/vmd/led/led.o 00:03:19.115 LINK idxd_perf 00:03:19.115 CC app/spdk_nvme_perf/perf.o 00:03:19.115 CXX test/cpp_headers/bit_pool.o 00:03:19.115 CXX test/cpp_headers/blob_bdev.o 00:03:19.115 LINK thread 00:03:19.115 CC app/spdk_nvme_identify/identify.o 00:03:19.115 LINK nvme_fuzz 00:03:19.115 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.115 CC test/env/vtophys/vtophys.o 00:03:19.115 LINK led 00:03:19.372 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.372 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.372 LINK vtophys 00:03:19.372 CC app/spdk_top/spdk_top.o 00:03:19.372 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.372 CXX test/cpp_headers/blobfs.o 00:03:19.372 CC app/vhost/vhost.o 00:03:19.372 CXX test/cpp_headers/blob.o 00:03:19.630 LINK spdk_nvme_discover 00:03:19.630 CC examples/sock/hello_world/hello_sock.o 00:03:19.630 CXX test/cpp_headers/conf.o 00:03:19.630 LINK vhost 00:03:19.630 CXX test/cpp_headers/config.o 00:03:19.630 LINK mem_callbacks 00:03:19.630 CC app/spdk_dd/spdk_dd.o 00:03:19.889 CXX test/cpp_headers/cpuset.o 00:03:19.889 LINK hello_sock 00:03:19.889 LINK spdk_nvme_perf 00:03:19.889 LINK spdk_nvme_identify 
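The TEST_HEADER entries and the test/cpp_headers CXX objects above are a self-containment check: every public header under include/spdk is compiled on its own to make sure it pulls in whatever it needs. A rough manual equivalent, a sketch only and not the exact rule SPDK's makefiles use:

    # Compile each public header standalone as C++ from the repository root.
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" |
            g++ -std=c++11 -I include -x c++ -c - -o /dev/null || echo "FAIL: $h"
    done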
00:03:19.889 CC app/fio/nvme/fio_plugin.o 00:03:19.889 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.889 CXX test/cpp_headers/crc16.o 00:03:19.889 CC app/fio/bdev/fio_plugin.o 00:03:20.148 LINK env_dpdk_post_init 00:03:20.148 CXX test/cpp_headers/crc32.o 00:03:20.148 LINK spdk_top 00:03:20.148 CC examples/accel/perf/accel_perf.o 00:03:20.148 LINK spdk_dd 00:03:20.148 CC test/app/histogram_perf/histogram_perf.o 00:03:20.148 CC examples/blob/hello_world/hello_blob.o 00:03:20.148 CXX test/cpp_headers/crc64.o 00:03:20.148 CXX test/cpp_headers/dif.o 00:03:20.406 LINK histogram_perf 00:03:20.406 CC test/env/memory/memory_ut.o 00:03:20.406 CXX test/cpp_headers/dma.o 00:03:20.406 LINK spdk_nvme 00:03:20.406 LINK spdk_bdev 00:03:20.406 LINK hello_blob 00:03:20.406 CXX test/cpp_headers/endian.o 00:03:20.406 CC test/app/jsoncat/jsoncat.o 00:03:20.406 CC test/app/stub/stub.o 00:03:20.406 CXX test/cpp_headers/env_dpdk.o 00:03:20.406 LINK accel_perf 00:03:20.664 CC examples/nvme/hello_world/hello_world.o 00:03:20.664 LINK jsoncat 00:03:20.664 CC examples/nvme/reconnect/reconnect.o 00:03:20.664 CXX test/cpp_headers/env.o 00:03:20.664 LINK stub 00:03:20.664 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.664 CC examples/blob/cli/blobcli.o 00:03:20.664 CC examples/nvme/arbitration/arbitration.o 00:03:20.664 LINK iscsi_fuzz 00:03:20.664 CXX test/cpp_headers/event.o 00:03:20.664 LINK hello_world 00:03:20.664 CXX test/cpp_headers/fd_group.o 00:03:20.922 LINK reconnect 00:03:20.922 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.922 CXX test/cpp_headers/fd.o 00:03:20.922 LINK arbitration 00:03:20.922 CC examples/nvme/hotplug/hotplug.o 00:03:21.180 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.180 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.180 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.180 CXX test/cpp_headers/file.o 00:03:21.180 LINK nvme_manage 00:03:21.180 LINK blobcli 00:03:21.180 LINK hello_bdev 00:03:21.180 CXX test/cpp_headers/ftl.o 00:03:21.180 CXX test/cpp_headers/gpt_spec.o 00:03:21.180 LINK memory_ut 00:03:21.180 LINK hotplug 00:03:21.437 CXX test/cpp_headers/hexlify.o 00:03:21.437 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.437 CXX test/cpp_headers/histogram_data.o 00:03:21.437 CC examples/nvme/abort/abort.o 00:03:21.437 LINK vhost_fuzz 00:03:21.437 CXX test/cpp_headers/idxd.o 00:03:21.437 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.437 CC test/event/event_perf/event_perf.o 00:03:21.694 CC test/env/pci/pci_ut.o 00:03:21.694 LINK cmb_copy 00:03:21.694 CC test/event/reactor/reactor.o 00:03:21.694 CXX test/cpp_headers/idxd_spec.o 00:03:21.694 CC test/event/reactor_perf/reactor_perf.o 00:03:21.694 LINK pmr_persistence 00:03:21.694 CC test/event/app_repeat/app_repeat.o 00:03:21.694 LINK event_perf 00:03:21.694 LINK abort 00:03:21.694 CXX test/cpp_headers/init.o 00:03:21.694 LINK reactor 00:03:21.694 LINK bdevperf 00:03:21.694 LINK reactor_perf 00:03:21.694 CXX test/cpp_headers/ioat.o 00:03:21.952 CXX test/cpp_headers/ioat_spec.o 00:03:21.952 LINK app_repeat 00:03:21.952 CC test/event/scheduler/scheduler.o 00:03:21.952 CXX test/cpp_headers/iscsi_spec.o 00:03:21.952 CXX test/cpp_headers/json.o 00:03:21.952 LINK pci_ut 00:03:21.952 CC test/rpc_client/rpc_client_test.o 00:03:21.952 CXX test/cpp_headers/jsonrpc.o 00:03:21.952 CXX test/cpp_headers/keyring.o 00:03:21.952 CXX test/cpp_headers/keyring_module.o 00:03:21.952 CC test/nvme/aer/aer.o 00:03:22.210 LINK scheduler 00:03:22.210 CC test/nvme/reset/reset.o 00:03:22.210 LINK rpc_client_test 
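bdevperf, built from examples/bdev/bdevperf above, is commonly used to drive I/O against bdevs in SPDK test jobs. A hedged invocation sketch: the flags are quoted from memory (queue depth, I/O size, workload, run time) and bdev.json is a hypothetical configuration file describing the bdevs to test:

    ./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w randread -t 10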
00:03:22.210 CC examples/nvmf/nvmf/nvmf.o 00:03:22.210 CXX test/cpp_headers/likely.o 00:03:22.210 CC test/nvme/sgl/sgl.o 00:03:22.210 CC test/accel/dif/dif.o 00:03:22.210 CXX test/cpp_headers/log.o 00:03:22.210 LINK aer 00:03:22.468 LINK reset 00:03:22.468 CC test/blobfs/mkfs/mkfs.o 00:03:22.468 CC test/nvme/e2edp/nvme_dp.o 00:03:22.468 CC test/lvol/esnap/esnap.o 00:03:22.468 LINK nvmf 00:03:22.468 CC test/nvme/overhead/overhead.o 00:03:22.468 CXX test/cpp_headers/lvol.o 00:03:22.468 LINK sgl 00:03:22.468 LINK mkfs 00:03:22.468 CC test/nvme/err_injection/err_injection.o 00:03:22.725 CC test/nvme/startup/startup.o 00:03:22.725 CXX test/cpp_headers/memory.o 00:03:22.725 LINK nvme_dp 00:03:22.725 LINK overhead 00:03:22.725 LINK dif 00:03:22.725 CC test/nvme/reserve/reserve.o 00:03:22.725 CC test/nvme/simple_copy/simple_copy.o 00:03:22.725 LINK err_injection 00:03:22.725 LINK startup 00:03:22.725 CXX test/cpp_headers/mmio.o 00:03:22.725 CC test/nvme/connect_stress/connect_stress.o 00:03:22.982 CC test/nvme/boot_partition/boot_partition.o 00:03:22.982 LINK reserve 00:03:22.982 CXX test/cpp_headers/nbd.o 00:03:22.982 CXX test/cpp_headers/net.o 00:03:22.982 CXX test/cpp_headers/notify.o 00:03:22.982 LINK simple_copy 00:03:22.982 CC test/nvme/compliance/nvme_compliance.o 00:03:22.982 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:22.982 CC test/nvme/fused_ordering/fused_ordering.o 00:03:22.982 LINK connect_stress 00:03:22.982 LINK boot_partition 00:03:23.240 CXX test/cpp_headers/nvme.o 00:03:23.240 CXX test/cpp_headers/nvme_intel.o 00:03:23.240 CC test/nvme/fdp/fdp.o 00:03:23.240 CC test/nvme/cuse/cuse.o 00:03:23.240 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.240 LINK doorbell_aers 00:03:23.240 LINK fused_ordering 00:03:23.240 LINK nvme_compliance 00:03:23.240 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.240 CXX test/cpp_headers/nvme_spec.o 00:03:23.240 CXX test/cpp_headers/nvme_zns.o 00:03:23.240 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.499 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.499 CXX test/cpp_headers/nvmf.o 00:03:23.499 CXX test/cpp_headers/nvmf_spec.o 00:03:23.499 CC test/bdev/bdevio/bdevio.o 00:03:23.499 CXX test/cpp_headers/nvmf_transport.o 00:03:23.499 LINK fdp 00:03:23.499 CXX test/cpp_headers/opal.o 00:03:23.499 CXX test/cpp_headers/opal_spec.o 00:03:23.499 CXX test/cpp_headers/pci_ids.o 00:03:23.499 CXX test/cpp_headers/pipe.o 00:03:23.499 CXX test/cpp_headers/queue.o 00:03:23.499 CXX test/cpp_headers/reduce.o 00:03:23.499 CXX test/cpp_headers/rpc.o 00:03:23.756 CXX test/cpp_headers/scheduler.o 00:03:23.756 CXX test/cpp_headers/scsi.o 00:03:23.756 CXX test/cpp_headers/scsi_spec.o 00:03:23.756 CXX test/cpp_headers/sock.o 00:03:23.756 CXX test/cpp_headers/stdinc.o 00:03:23.756 CXX test/cpp_headers/string.o 00:03:23.756 LINK bdevio 00:03:23.756 CXX test/cpp_headers/thread.o 00:03:23.756 CXX test/cpp_headers/trace.o 00:03:23.756 CXX test/cpp_headers/trace_parser.o 00:03:23.756 CXX test/cpp_headers/tree.o 00:03:23.756 CXX test/cpp_headers/ublk.o 00:03:23.756 CXX test/cpp_headers/util.o 00:03:24.014 CXX test/cpp_headers/uuid.o 00:03:24.014 CXX test/cpp_headers/version.o 00:03:24.014 CXX test/cpp_headers/vfio_user_pci.o 00:03:24.014 CXX test/cpp_headers/vfio_user_spec.o 00:03:24.014 CXX test/cpp_headers/vhost.o 00:03:24.014 CXX test/cpp_headers/vmd.o 00:03:24.014 CXX test/cpp_headers/xor.o 00:03:24.014 CXX test/cpp_headers/zipf.o 00:03:24.273 LINK cuse 00:03:26.804 LINK esnap 00:03:26.804 00:03:26.804 real 1m0.275s 00:03:26.804 user 5m6.015s 00:03:26.804 sys 1m39.292s 00:03:26.804 
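That completes the make step: roughly one minute of wall-clock time (real 1m0.275s) for about five minutes of CPU time, consistent with a parallel build. A minimal sketch of reproducing this build outside CI; the configure flag is inferred from the modules compiled above (the uring sock and bdev modules usually imply --with-uring), so the job's real configuration line may differ:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-uring
    make -j10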
21:17:00 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:26.804 ************************************ 00:03:26.804 END TEST make 00:03:26.804 ************************************ 00:03:26.804 21:17:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:27.063 21:17:00 -- common/autotest_common.sh@1142 -- $ return 0 00:03:27.063 21:17:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:27.064 21:17:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:27.064 21:17:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:27.064 21:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.064 21:17:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:27.064 21:17:00 -- pm/common@44 -- $ pid=5141 00:03:27.064 21:17:00 -- pm/common@50 -- $ kill -TERM 5141 00:03:27.064 21:17:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.064 21:17:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:27.064 21:17:00 -- pm/common@44 -- $ pid=5143 00:03:27.064 21:17:00 -- pm/common@50 -- $ kill -TERM 5143 00:03:27.064 21:17:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:27.064 21:17:00 -- nvmf/common.sh@7 -- # uname -s 00:03:27.064 21:17:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.064 21:17:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.064 21:17:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.064 21:17:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.064 21:17:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.064 21:17:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.064 21:17:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.064 21:17:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.064 21:17:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.064 21:17:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.064 21:17:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:03:27.064 21:17:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:03:27.064 21:17:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.064 21:17:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.064 21:17:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:27.064 21:17:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:27.064 21:17:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:27.064 21:17:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.064 21:17:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.064 21:17:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.064 21:17:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.064 21:17:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.064 21:17:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.064 21:17:00 -- paths/export.sh@5 -- # export PATH 00:03:27.064 21:17:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:27.064 21:17:00 -- nvmf/common.sh@47 -- # : 0 00:03:27.064 21:17:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:27.064 21:17:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:27.064 21:17:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:27.064 21:17:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.064 21:17:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.064 21:17:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:27.064 21:17:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:27.064 21:17:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:27.064 21:17:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.064 21:17:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.064 21:17:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.064 21:17:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:27.064 21:17:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.064 21:17:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.064 21:17:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.064 21:17:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.421 21:17:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.421 21:17:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:27.421 21:17:00 -- spdk/autotest.sh@48 -- # udevadm_pid=52783 00:03:27.421 21:17:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:27.421 21:17:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:27.421 21:17:00 -- pm/common@17 -- # local monitor 00:03:27.421 21:17:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.421 21:17:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:27.421 21:17:00 -- pm/common@21 -- # date +%s 00:03:27.421 21:17:00 -- pm/common@25 -- # sleep 1 00:03:27.421 21:17:00 -- pm/common@21 -- # date +%s 00:03:27.421 21:17:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721078220 00:03:27.421 21:17:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721078220 00:03:27.421 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721078220_collect-vmstat.pm.log 00:03:27.421 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721078220_collect-cpu-load.pm.log 00:03:28.403 21:17:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.403 21:17:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:28.403 21:17:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:28.403 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:03:28.403 21:17:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:28.403 21:17:01 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:28.403 21:17:01 -- common/autotest_common.sh@10 -- # set +x 00:03:28.403 21:17:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.403 21:17:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:28.403 21:17:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:28.403 21:17:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:28.403 21:17:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:28.403 21:17:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:28.403 21:17:01 -- common/autotest_common.sh@1455 -- # uname 00:03:28.403 21:17:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:28.403 21:17:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:28.403 21:17:01 -- common/autotest_common.sh@1475 -- # uname 00:03:28.403 21:17:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:28.403 21:17:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:28.403 21:17:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:28.403 21:17:01 -- spdk/autotest.sh@72 -- # hash lcov 00:03:28.403 21:17:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:28.403 21:17:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:28.403 --rc lcov_branch_coverage=1 00:03:28.403 --rc lcov_function_coverage=1 00:03:28.403 --rc genhtml_branch_coverage=1 00:03:28.403 --rc genhtml_function_coverage=1 00:03:28.403 --rc genhtml_legend=1 00:03:28.403 --rc geninfo_all_blocks=1 00:03:28.403 ' 00:03:28.403 21:17:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:28.403 --rc lcov_branch_coverage=1 00:03:28.403 --rc lcov_function_coverage=1 00:03:28.403 --rc genhtml_branch_coverage=1 00:03:28.403 --rc genhtml_function_coverage=1 00:03:28.403 --rc genhtml_legend=1 00:03:28.403 --rc geninfo_all_blocks=1 00:03:28.403 ' 00:03:28.403 21:17:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:28.403 --rc lcov_branch_coverage=1 00:03:28.403 --rc lcov_function_coverage=1 00:03:28.403 --rc genhtml_branch_coverage=1 00:03:28.403 --rc genhtml_function_coverage=1 00:03:28.403 --rc genhtml_legend=1 00:03:28.403 --rc geninfo_all_blocks=1 00:03:28.403 --no-external' 00:03:28.403 21:17:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:28.403 --rc lcov_branch_coverage=1 00:03:28.403 --rc lcov_function_coverage=1 00:03:28.403 --rc genhtml_branch_coverage=1 00:03:28.403 --rc genhtml_function_coverage=1 00:03:28.403 --rc genhtml_legend=1 00:03:28.403 --rc geninfo_all_blocks=1 00:03:28.403 --no-external' 00:03:28.403 21:17:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:28.403 lcov: LCOV version 
1.14 00:03:28.403 21:17:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:43.386 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:43.386 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:55.587 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:55.587 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:55.588 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:55.588 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:55.588 
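The long run of "no functions found" warnings here (and continuing below) is largely expected at this point: geninfo is scanning the .gcno files emitted for the header-compilation objects above, and a translation unit that only includes a header has no executable functions for GCOV to record. The warnings are triggered by the coverage baseline capture started earlier in this phase; a condensed sketch of that capture, with the output directory as an assumption, looks like:

#!/usr/bin/env bash
# Condensed sketch of the lcov baseline capture seen above; directories are assumptions.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
OUT_DIR=${OUT_DIR:-$SPDK_DIR/../output}

# -c -i captures "initial" (all-zero) coverage for every instrumented object so
# later test runs can be compared against it; header-only objects legitimately
# report "no functions found" during this pass.
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --no-external -q -c -i -t Baseline \
     -d "$SPDK_DIR" -o "$OUT_DIR/cov_base.info"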
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:55.588 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:55.588 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:58.905 21:17:31 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:58.905 21:17:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.905 21:17:31 -- common/autotest_common.sh@10 -- # set +x 00:03:58.905 21:17:31 -- spdk/autotest.sh@91 -- # rm -f 00:03:58.905 21:17:31 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.472 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:59.472 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:59.472 21:17:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:59.472 21:17:32 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:59.472 21:17:32 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:59.472 21:17:32 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:59.472 21:17:32 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.472 21:17:32 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:59.472 21:17:32 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:59.472 21:17:32 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.472 21:17:32 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:59.472 21:17:32 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:59.472 
21:17:32 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.472 21:17:32 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:59.472 21:17:32 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:59.472 21:17:32 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.472 21:17:32 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:59.472 21:17:32 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:59.472 21:17:32 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:59.472 21:17:32 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.472 21:17:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:59.472 21:17:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.472 21:17:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.472 21:17:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:59.472 21:17:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:59.472 21:17:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.472 No valid GPT data, bailing 00:03:59.472 21:17:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.472 21:17:32 -- scripts/common.sh@391 -- # pt= 00:03:59.472 21:17:32 -- scripts/common.sh@392 -- # return 1 00:03:59.472 21:17:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.472 1+0 records in 00:03:59.472 1+0 records out 00:03:59.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00854561 s, 123 MB/s 00:03:59.472 21:17:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.472 21:17:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.472 21:17:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:59.472 21:17:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:59.472 21:17:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:59.731 No valid GPT data, bailing 00:03:59.731 21:17:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:59.731 21:17:32 -- scripts/common.sh@391 -- # pt= 00:03:59.731 21:17:32 -- scripts/common.sh@392 -- # return 1 00:03:59.731 21:17:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:59.731 1+0 records in 00:03:59.731 1+0 records out 00:03:59.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446057 s, 235 MB/s 00:03:59.731 21:17:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.731 21:17:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.731 21:17:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:59.731 21:17:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:59.731 21:17:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:59.731 No valid GPT data, bailing 00:03:59.731 21:17:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:59.731 21:17:32 -- scripts/common.sh@391 -- # pt= 00:03:59.731 21:17:32 -- scripts/common.sh@392 -- # return 1 
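The trace above and immediately below walks every NVMe namespace twice: get_zoned_devs reads /sys/block/<dev>/queue/zoned to exclude zoned namespaces, and each remaining device is then probed for a partition table and, when none is found ("No valid GPT data, bailing"), overwritten with a single 1 MiB block of zeros so stale metadata cannot leak into the tests. A condensed sketch of that pattern, using a plain blkid probe in place of SPDK's spdk-gpt.py helper, is:

#!/usr/bin/env bash
# Condensed sketch of the device-preparation pattern traced above. It uses blkid
# instead of scripts/spdk-gpt.py, so treat it as an approximation.
set -euo pipefail
shopt -s nullglob

for sysdev in /sys/block/nvme*; do
    dev=/dev/$(basename "$sysdev")

    # Skip zoned namespaces; queue/zoned reports "none" for conventional ones.
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi

    # No partition table means the namespace is free for testing; scrub its
    # first 1 MiB so leftovers from a previous run cannot confuse the tests.
    if [[ -z $(blkid -s PTTYPE -o value "$dev" || true) ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done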
00:03:59.731 21:17:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:59.731 1+0 records in 00:03:59.731 1+0 records out 00:03:59.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552331 s, 190 MB/s 00:03:59.731 21:17:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.731 21:17:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:59.731 21:17:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:59.731 21:17:32 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:59.731 21:17:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:59.731 No valid GPT data, bailing 00:03:59.731 21:17:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:59.731 21:17:33 -- scripts/common.sh@391 -- # pt= 00:03:59.731 21:17:33 -- scripts/common.sh@392 -- # return 1 00:03:59.731 21:17:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:59.731 1+0 records in 00:03:59.731 1+0 records out 00:03:59.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573613 s, 183 MB/s 00:03:59.731 21:17:33 -- spdk/autotest.sh@118 -- # sync 00:03:59.731 21:17:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.731 21:17:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.731 21:17:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.261 21:17:35 -- spdk/autotest.sh@124 -- # uname -s 00:04:02.261 21:17:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:02.261 21:17:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:02.261 21:17:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.261 21:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.261 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:04:02.261 ************************************ 00:04:02.261 START TEST setup.sh 00:04:02.261 ************************************ 00:04:02.261 21:17:35 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:02.519 * Looking for test storage... 00:04:02.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.519 21:17:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:02.519 21:17:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:02.519 21:17:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:02.519 21:17:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.519 21:17:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.519 21:17:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.519 ************************************ 00:04:02.519 START TEST acl 00:04:02.520 ************************************ 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:02.520 * Looking for test storage... 
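With the namespaces prepared, the run moves on to the setup test suite, and the START TEST / END TEST banners plus the real/user/sys summaries that punctuate this log come from a wrapper that names and times each sub-test. A simplified sketch of that wrapper pattern is below; it is not the actual run_test implementation from autotest_common.sh, just the shape visible in the output.

#!/usr/bin/env bash
# Simplified sketch of a banner-and-timing wrapper like the one producing the
# START TEST / END TEST lines in this log; not the real run_test implementation.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # prints the real/user/sys summary seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Example: run_test_sketch "setup.sh" /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh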
00:04:02.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.520 21:17:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:02.520 21:17:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:02.520 21:17:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.520 21:17:35 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.459 21:17:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:03.459 21:17:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:03.459 21:17:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.459 21:17:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:03.459 21:17:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.459 21:17:36 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:04.401 21:17:37 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.401 Hugepages 00:04:04.401 node hugesize free / total 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.401 00:04:04.401 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:04.401 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:04.658 21:17:37 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:04.658 21:17:37 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.658 21:17:37 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.658 21:17:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:04.658 ************************************ 00:04:04.658 START TEST denied 00:04:04.658 ************************************ 00:04:04.658 21:17:37 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:04.658 21:17:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:04.658 21:17:37 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:04.658 21:17:37 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:04.658 21:17:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.658 21:17:37 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.032 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:06.032 21:17:38 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:06.032 21:17:38 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:06.032 21:17:38 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.032 21:17:38 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.032 21:17:39 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.596 00:04:06.596 real 0m1.825s 00:04:06.596 user 0m0.664s 00:04:06.596 sys 0m1.135s 00:04:06.596 21:17:39 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.596 ************************************ 00:04:06.596 END TEST denied 00:04:06.596 ************************************ 00:04:06.596 21:17:39 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:06.597 21:17:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:06.597 21:17:39 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:06.597 21:17:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.597 21:17:39 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.597 21:17:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.597 ************************************ 00:04:06.597 START TEST allowed 00:04:06.597 ************************************ 00:04:06.597 21:17:39 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:06.597 21:17:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:06.597 21:17:39 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:06.597 21:17:39 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:06.597 21:17:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.597 21:17:39 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.529 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.529 21:17:40 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.466 00:04:08.466 real 0m1.923s 00:04:08.466 user 0m0.713s 00:04:08.466 sys 0m1.228s 00:04:08.466 21:17:41 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:08.466 21:17:41 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:08.466 ************************************ 00:04:08.466 END TEST allowed 00:04:08.466 ************************************ 00:04:08.466 21:17:41 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:08.466 ************************************ 00:04:08.466 END TEST acl 00:04:08.466 ************************************ 00:04:08.466 00:04:08.466 real 0m6.098s 00:04:08.466 user 0m2.319s 00:04:08.466 sys 0m3.806s 00:04:08.466 21:17:41 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.466 21:17:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:08.726 21:17:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:08.726 21:17:41 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:08.726 21:17:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.726 21:17:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.726 21:17:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.726 ************************************ 00:04:08.726 START TEST hugepages 00:04:08.726 ************************************ 00:04:08.726 21:17:41 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:08.726 * Looking for test storage... 00:04:08.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.726 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6013836 kB' 'MemAvailable: 7395768 kB' 'Buffers: 2436 kB' 'Cached: 1596508 kB' 'SwapCached: 0 kB' 'Active: 443128 kB' 'Inactive: 1267376 kB' 'Active(anon): 122300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 113688 kB' 'Mapped: 48696 kB' 'Shmem: 
10480 kB' 'KReclaimable: 61336 kB' 'Slab: 135200 kB' 'SReclaimable: 61336 kB' 'SUnreclaim: 73864 kB' 'KernelStack: 6248 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 344408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.727 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.728 21:17:42 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:08.728 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:08.987 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:08.987 21:17:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:08.987 21:17:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.987 21:17:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.987 21:17:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.987 ************************************ 00:04:08.987 START TEST default_setup 00:04:08.987 ************************************ 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.987 21:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.926 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.926 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109484 kB' 'MemAvailable: 9491308 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452912 kB' 'Inactive: 1267388 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123164 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6272 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.926 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
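The stretch of trace around this point is setup/common.sh answering `get_meminfo AnonHugePages` for the default_setup test: each /proc/meminfo line is split on `IFS=': '` with `read -r var val _`, every key that does not match the requested one falls through to `continue`, and only the matching key's value is echoed back. Below is a minimal standalone sketch of that scan pattern; `get_meminfo_sketch` is a hypothetical name and the direct while-read loop is a simplification of the behaviour visible in the trace, not the SPDK source (which buffers the file with `mapfile` first and can also read per-node meminfo).

```bash
#!/usr/bin/env bash
# Minimal sketch of the scan pattern exercised above: split each /proc/meminfo
# line on ': ', skip keys that do not match, print the value of the requested
# field.  Approximation only -- the real setup/common.sh helper buffers the
# file with mapfile and can also read /sys/devices/system/node/nodeN/meminfo
# (stripping the "Node N " prefix) for per-node queries.
get_meminfo_sketch() {                     # hypothetical helper name
    local get=$1                           # e.g. AnonHugePages, HugePages_Surp
    local var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of "continue" above
        echo "$val"                        # kB for sizes, a page count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize            # 2048 in the meminfo snapshot above
get_meminfo_sketch HugePages_Total         # 1024 once the test pool is configured
```

Run against the meminfo snapshot captured in this trace, such a scan returns 2048 for Hugepagesize and 1024 for HugePages_Total, which is why the earlier pass ended with `echo 2048` and `default_hugepages=2048`.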
00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:09.927 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109484 kB' 'MemAvailable: 9491312 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452708 kB' 'Inactive: 1267392 kB' 'Active(anon): 131880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134976 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73884 kB' 'KernelStack: 6224 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
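After the AnonHugePages pass returns `anon=0`, the same scan is repeated here for HugePages_Surp and then for HugePages_Rsvd, and those counters feed the test's check of the pool requested earlier: 1024 pages of 2048 kB (the snapshot above reports HugePages_Total: 1024 and Hugetlb: 2097152 kB). The sketch below is a hedged illustration of that bookkeeping using awk against /proc/meminfo; the final comparison is illustrative and not the exact verify_nr_hugepages logic.

```bash
#!/usr/bin/env bash
# Illustrative bookkeeping only: read the hugepage counters this part of the
# trace is scanning for and compare the pool against the 1024 x 2048 kB pages
# that default_setup requested.  Expected value taken from the trace; the
# comparison is an assumption, not the SPDK verification code.

expected_pages=1024                                      # nr_hugepages from the trace
page_kb=$(awk '/^Hugepagesize:/     {print $2}' /proc/meminfo)
total=$(awk   '/^HugePages_Total:/  {print $2}' /proc/meminfo)
free=$(awk    '/^HugePages_Free:/   {print $2}' /proc/meminfo)
surp=$(awk    '/^HugePages_Surp:/   {print $2}' /proc/meminfo)
rsvd=$(awk    '/^HugePages_Rsvd:/   {print $2}' /proc/meminfo)

echo "pool: ${total} x ${page_kb} kB (free=${free} surp=${surp} rsvd=${rsvd})"

# With surp=0 and rsvd=0, as in the snapshot above, the non-surplus pool should
# be exactly what was requested.
if (( total - surp == expected_pages )); then
    echo "hugepage pool matches the requested ${expected_pages} pages"
else
    echo "unexpected hugepage pool size" >&2
    exit 1
fi
```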
00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.928 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.929 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109484 kB' 'MemAvailable: 9491312 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452820 kB' 'Inactive: 1267392 kB' 'Active(anon): 131992 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 
48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134964 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73872 kB' 'KernelStack: 6256 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
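The block of near-identical entries above and below is bash xtrace output from the get_meminfo helper in setup/common.sh: it walks /proc/meminfo one "Key: value" pair at a time (IFS=': '; read -r var val _) and hits continue on every field until the requested one, here HugePages_Rsvd, matches; the backslash-escaped string inside each [[ ... ]] test is just the literal field name as xtrace prints it. A minimal sketch of that lookup pattern, using a hypothetical function name and simplified argument handling rather than the actual SPDK helper, could look like this:

  #!/usr/bin/env bash
  # Hypothetical sketch of the meminfo lookup traced in this log; names are illustrative.
  shopt -s extglob   # for the +([0-9]) pattern used to strip "Node <id> " prefixes

  get_meminfo_sketch() {   # usage: get_meminfo_sketch <field> [numa-node]
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # Prefer the per-NUMA-node meminfo file when a node id is given and the file exists
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")            # per-node lines start with "Node 0 ..."
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"  # split e.g. "HugePages_Rsvd:       0"
          [[ $var == "$get" ]] || continue        # skip every other field, as in the trace
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd   # prints the reserved-hugepage count, 0 in this run

The trace below continues the same scan until the HugePages_Rsvd line is reached, at which point the value (0) is echoed back to the caller in setup/hugepages.sh.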
00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.930 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 
21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.931 nr_hugepages=1024 00:04:09.931 resv_hugepages=0 00:04:09.931 surplus_hugepages=0 00:04:09.931 anon_hugepages=0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.931 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109484 kB' 'MemAvailable: 9491312 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452380 kB' 'Inactive: 1267392 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122740 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134960 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73868 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.192 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 
21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.193 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8109484 kB' 'MemUsed: 4132492 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1267392 kB' 'Active(anon): 131556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1598684 kB' 'Mapped: 48556 kB' 'AnonPages: 122712 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61092 kB' 'Slab: 134956 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.194 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
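The block above is the get_meminfo helper from setup/common.sh walking /proc/meminfo one "key: value" pair at a time with IFS=': ': every key that is not the requested HugePages_Surp hits the "continue" branch, and the matching line echoes its value (0 here) before the helper returns. As a reading aid only, here is a minimal standalone bash sketch of that kind of scan, reconstructed from the trace rather than copied from setup/common.sh; the helper name and the direct read from /proc/meminfo are simplifications, and the real helper also handles the per-node /sys/devices/system/node/node<N>/meminfo files checked in the trace.

    # get_meminfo-style lookup, reconstructed from the xtrace above (simplified sketch).
    # Prints the value of a single /proc/meminfo field, defaulting to 0 if it is absent.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are the "continue" lines above
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        echo 0                                 # field not present: report 0, like the trace does
        return 0
    }

    # Usage in the spirit of the trace: surp=$(meminfo_value HugePages_Surp)   # -> 0 on this runner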
00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.195 node0=1024 expecting 1024 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.195 00:04:10.195 real 0m1.224s 00:04:10.195 user 0m0.507s 00:04:10.195 sys 0m0.676s 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.195 ************************************ 00:04:10.195 END TEST default_setup 00:04:10.195 ************************************ 00:04:10.195 21:17:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 21:17:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.195 21:17:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:10.195 21:17:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.195 21:17:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.195 21:17:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.195 ************************************ 00:04:10.195 START TEST per_node_1G_alloc 00:04:10.195 ************************************ 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.195 21:17:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.195 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.765 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.765 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9157692 kB' 'MemAvailable: 10539520 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 453000 kB' 'Inactive: 1267392 kB' 'Active(anon): 132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6212 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.765 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.766 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.767 21:17:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9157692 kB' 'MemAvailable: 10539520 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 1267392 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134988 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 
9437184 kB' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.767 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.768 21:17:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.768 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.769 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9157692 kB' 'MemAvailable: 10539520 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452464 kB' 'Inactive: 1267392 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh walks every meminfo key above with IFS=': ' read -r var val _, hitting 'continue' on each key that is not HugePages_Rsvd; the trace picks up again below at the final HugePages_* keys]
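For readers following the trace, the walk that was just elided is simply a key lookup over the meminfo snapshot printed above. A minimal bash sketch of a get_meminfo-style helper, assuming a simplified interface (the helper name and parameters here are illustrative, not the exact setup/common.sh API):

    # Minimal sketch: look up one meminfo key the way the trace above does.
    shopt -s extglob
    get_meminfo_value() {                      # illustrative name, not the real API
        local get=$1 node=${2:-}               # key to find, optional NUMA node
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix lines with "Node N "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # In this run: get_meminfo_value HugePages_Rsvd  -> 0
    #              get_meminfo_value HugePages_Total -> 512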
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.770 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.770 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:10.771 nr_hugepages=512 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:10.771 resv_hugepages=0 00:04:10.771 surplus_hugepages=0 00:04:10.771 anon_hugepages=0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.771 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9158764 kB' 'MemAvailable: 10540592 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 452424 kB' 'Inactive: 1267392 kB' 'Active(anon): 131596 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 
kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6272 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same per-key walk repeats for HugePages_Total, with 'continue' on every non-matching key; it resumes below when HugePages_Total is reached and its value (512) is echoed]
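Once the surplus and reserved counts have been read back and HugePages_Total is fetched below, the script's assertion reduces to one piece of arithmetic. A small sketch of that check, assuming the helper sketched earlier; the values in the comments are the ones visible in this run:

    nr_hugepages=512                                  # pages the test configured
    surp=$(get_meminfo_value HugePages_Surp)          # 0
    resv=$(get_meminfo_value HugePages_Rsvd)          # 0
    total=$(get_meminfo_value HugePages_Total)        # 512
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2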
-- setup/common.sh@32 -- # continue 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.772 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.032 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9159132 kB' 'MemUsed: 3082844 kB' 'SwapCached: 0 kB' 'Active: 452636 kB' 'Inactive: 1267392 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1598684 kB' 'Mapped: 48556 kB' 'AnonPages: 122936 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the per-key walk runs once more, this time against node0's meminfo, looking for HugePages_Surp; it resumes below at the final HugePages_* keys, where HugePages_Surp matches and 0 is echoed]
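This last lookup reads /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and the test then compares each node's hugepage count with what it expects (the "node0=512 expecting 512" line below). A sketch of that per-node comparison, again assuming the illustrative helper above; the real hugepages.sh keeps its bookkeeping in nodes_test/nodes_sys arrays:

    declare -a expected=( [0]=512 )                        # pages expected on each NUMA node
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(get_meminfo_value HugePages_Total "$node")
        surp=$(get_meminfo_value HugePages_Surp "$node")   # read as in the trace; 0 here, unused in this simplified check
        echo "node$node=$total expecting ${expected[node]}"
        (( total == expected[node] )) || echo "node$node hugepage count mismatch" >&2
    done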
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.033 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.033 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.033 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.033 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.034 node0=512 expecting 512 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.034 00:04:11.034 real 0m0.770s 00:04:11.034 user 0m0.332s 00:04:11.034 sys 0m0.450s 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.034 21:17:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:11.034 ************************************ 00:04:11.034 END TEST per_node_1G_alloc 00:04:11.034 ************************************ 00:04:11.034 21:17:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.034 21:17:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:11.034 21:17:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.034 21:17:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.034 21:17:44 
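For readers skimming the xtrace above: the per_node_1G_alloc verification that just ended (setup/hugepages.sh@117-130) reduces to the small loop below. Only the array names and the node0=512 figure are taken from the trace; the standalone scaffolding, and the nodes_sys values in particular, are illustrative assumptions, not the SPDK source.

  #!/usr/bin/env bash
  # Condensed sketch of the per-node check traced above (illustrative only).
  nodes_test=([0]=512)    # pages the test expects on node 0 ("expecting 512" in the trace)
  nodes_sys=([0]=512)     # pages the kernel actually reports per node (hypothetical stand-in)
  declare -a sorted_t sorted_s

  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += 0 ))      # add the node's HugePages_Surp, which get_meminfo returned as 0
      sorted_t[nodes_test[node]]=1     # bucket the expected counts
      sorted_s[nodes_sys[node]]=1      # bucket the observed counts
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1   # the '[[ 512 == 512 ]]' seen above
  done

The final [[ ... ]] is what would fail the test if a node ended up with a different number of 2048 kB pages than requested.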
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.034 ************************************ 00:04:11.034 START TEST even_2G_alloc 00:04:11.034 ************************************ 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.034 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.604 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.604 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc 
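The even_2G_alloc prologue in the line above boils down to one division: the requested 2097152 kB over the default 2048 kB hugepage size gives nr_hugepages=1024, which lands on the single memory node as nodes_test[0]=1024 before scripts/setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A minimal sketch of that arithmetic follows; the division is inferred from the numbers in the trace (the real get_test_nr_hugepages helper may compute it differently), and the echo at the end only mirrors the environment handed to setup.sh.

  #!/usr/bin/env bash
  # Illustrative reduction of get_test_nr_hugepages / get_test_nr_hugepages_per_node as traced above.
  default_hugepages=2048                 # kB, Hugepagesize from /proc/meminfo
  size=2097152                           # kB requested by even_2G_alloc (2 GiB)

  (( size >= default_hugepages )) || exit 1
  nr_hugepages=$(( size / default_hugepages ))     # 2097152 / 2048 = 1024 pages

  _no_nodes=1                            # this VM has a single NUMA node
  nodes_test=()
  (( _no_nodes > 0 )) && nodes_test[_no_nodes - 1]=$nr_hugepages   # nodes_test[0]=1024

  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ./scripts/setup.sh"   # shape of the call in the trace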
-- setup/hugepages.sh@92 -- # local surp 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493740 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1267396 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48676 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 135000 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73908 kB' 'KernelStack: 6288 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
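All of the repetitive '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' lines that follow are a single loop inside setup/common.sh's get_meminfo: it slurps /proc/meminfo (or the per-node meminfo file when a node is given), strips any leading 'Node N ' prefix, and walks the lines with IFS=': ' until the requested key matches, then echoes its value. The sketch below re-implements that shape under the same assumptions; it is not the SPDK source, and the loop body is organised slightly differently from the printf/read pipeline visible in the xtrace.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

  # Sketch of get_meminfo as it appears in the trace: get_meminfo <field> [node]
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node queries read the node-specific meminfo file instead
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix of per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"                    # the 'echo 0' further down in the trace is this step
          return 0
      done
      return 1
  }

  get_meminfo AnonHugePages   # prints 0 on the VM captured above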
setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.604 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493740 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452372 kB' 'Inactive: 1267396 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 
1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134980 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73888 kB' 'KernelStack: 6256 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.605 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
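As a quick sanity check on the snapshot just printed: the hugepage pool the test asked for is already in place, since HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb figure in the same dump. A throwaway one-liner along those lines (only standard /proc/meminfo field names are assumed):

  # Consistency check of the snapshot above: total hugepage pool in kB.
  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2}
       END {printf "HugePages_Total=%d x Hugepagesize=%d kB -> %d kB\n", t, sz, t*sz}' /proc/meminfo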
00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.606 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493740 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452380 kB' 'Inactive: 1267396 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134972 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6256 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
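At this point in the trace verify_nr_hugepages has already recorded anon=0 and surp=0 and is walking /proc/meminfo a third time for HugePages_Rsvd. The sketch below captures that bookkeeping as inferred from the hugepages.sh@96-100 lines above; the meminfo() helper is a stand-in for get_meminfo, the transparent_hugepage path is the standard kernel location rather than anything shown in this excerpt, and the final comparison against the requested page count happens after this excerpt ends, so it is only hinted at in a comment.

  #!/usr/bin/env bash
  # Sketch of the verify_nr_hugepages bookkeeping traced above (illustrative only).
  meminfo() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }   # stand-in for get_meminfo

  anon=0
  # AnonHugePages only counts when transparent hugepages are not set to [never]
  if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(meminfo AnonHugePages)
  fi
  surp=$(meminfo HugePages_Surp)   # surplus pages, 0 in the trace
  resv=$(meminfo HugePages_Rsvd)   # reserved pages, the query in progress here

  echo "anon=$anon surp=$surp resv=$resv"
  # These values feed the later comparison against the requested nr_hugepages,
  # which falls outside this excerpt.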
setup/common.sh@31 -- # read -r var val _ 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.607 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.608 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.609 nr_hugepages=1024 00:04:11.609 resv_hugepages=0 00:04:11.609 surplus_hugepages=0 00:04:11.609 anon_hugepages=0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemAvailable: 9493740 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 1267396 kB' 'Active(anon): 131560 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 122932 kB' 'Mapped: 48556 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134972 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73880 kB' 'KernelStack: 6256 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.609 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.609 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.610 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
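The long runs of "[[ <field> == ... ]]" followed by "continue" above are bash xtrace from the get_meminfo helper in setup/common.sh: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given) with IFS=': ', skips every field that is not the one requested, then echoes the value of the matching field; here the lookup is HugePages_Total and returns 1024. A minimal sketch of that scanning idea, not the actual helper (the real one also strips the "Node <n>" prefix from per-node files, which is what the mapfile and "${mem[@]#Node +([0-9]) }" entries in the trace correspond to):

  # Sketch only: print the value column of one /proc/meminfo field.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every line until the requested key matches, then print its value.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  # Example: get_meminfo_sketch HugePages_Total   -> 1024 on this test VM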
00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.611 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8111908 kB' 'MemUsed: 4130068 kB' 'SwapCached: 0 kB' 'Active: 452392 kB' 'Inactive: 1267396 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1598688 kB' 'Mapped: 48556 kB' 'AnonPages: 122932 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61092 kB' 'Slab: 134972 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.611 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.612 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.870 node0=1024 expecting 1024 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.870 00:04:11.870 real 0m0.734s 00:04:11.870 user 0m0.347s 00:04:11.870 sys 0m0.412s 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.870 21:17:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:11.870 ************************************ 00:04:11.870 END TEST even_2G_alloc 00:04:11.870 ************************************ 00:04:11.870 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.870 21:17:45 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:11.870 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.870 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.870 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.870 ************************************ 00:04:11.870 START TEST odd_alloc 00:04:11.870 ************************************ 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
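The even_2G_alloc case has just passed: node0 reports 1024 hugepages against an expectation of 1024, and the test finished in about 0.73 s. The odd_alloc case that starts here asks get_test_nr_hugepages for 2098176 kB (HUGEMEM=2049 MiB), which does not divide evenly into 2048 kB hugepages: 1024 pages cover only 2097152 kB, so the target becomes the odd count nr_hugepages=1025 (the later meminfo dump shows 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB', i.e. 1025 * 2048 kB). A small sketch of that sizing step, assuming a plain ceiling division rather than the exact arithmetic in setup/hugepages.sh:

  # Sketch: ceiling-divide a kB request into 2048 kB hugepages.
  size_kb=2098176          # HUGEMEM=2049 MiB expressed in kB
  hugepagesize_kb=2048     # from 'Hugepagesize: 2048 kB' in /proc/meminfo
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1025, matching the log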
00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.870 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.395 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.395 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8116924 kB' 'MemAvailable: 9498756 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 453032 kB' 'Inactive: 1267396 kB' 'Active(anon): 132204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123360 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134996 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73904 kB' 'KernelStack: 6244 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.395 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.396 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 
21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.397 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.398 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.402 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.402 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 
21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.403 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
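The get_meminfo calls traced here (setup/common.sh) all follow the same shape: snapshot the meminfo file, split each "Key: value" line with IFS=': ' and read -r var val _, and skip every key that is not the one requested, which is why the log shows one continue per /proc/meminfo field. A simplified, hedged reconstruction of that pattern, not the verbatim SPDK helper:

#!/usr/bin/env bash
# Simplified sketch of the get_meminfo pattern in the trace. It reads the file
# directly rather than via mapfile, and omits the "Node <n> " prefix stripping
# that setup/common.sh applies to per-node meminfo files.
get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	local var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue    # the "continue" entries in the log
		echo "$val"
		return 0
	done <"$mem_f"
	return 1
}
get_meminfo AnonHugePages    # prints 0 on this test VM, hence anon=0 above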
00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8117176 kB' 'MemAvailable: 9499008 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452456 kB' 'Inactive: 1267396 kB' 'Active(anon): 131628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122812 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 135004 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73912 kB' 'KernelStack: 6272 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.404 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.405 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
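The \H\u\g\e\P\a\g\e\s\_\S\u\r\p strings that fill this part of the log are not corruption: with xtrace enabled, bash re-quotes the quoted right-hand side of == inside [[ ]] by backslash-escaping each character, so the logged command still reads as a literal (non-glob) comparison. A small demo, assuming bash:

#!/usr/bin/env bash
# Demo of the xtrace behaviour behind the escaped patterns in this log.
get=HugePages_Surp
set -x
[[ HugePages_Free == "$get" ]] || echo "no match"
# xtrace output on stderr looks like:
#   + [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
#   + echo 'no match'
set +x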
00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.406 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.410 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.411 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 
21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.412 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.413 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8116384 kB' 'MemAvailable: 9498216 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452884 kB' 'Inactive: 1267396 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122988 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 135004 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73912 kB' 'KernelStack: 6288 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.414 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.414 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.415 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.420 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:12.421 nr_hugepages=1025 00:04:12.421 resv_hugepages=0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.421 surplus_hugepages=0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.421 anon_hugepages=0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.421 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8116384 kB' 'MemAvailable: 9498216 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 452616 kB' 'Inactive: 1267396 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122724 kB' 'Mapped: 48560 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134988 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73896 kB' 'KernelStack: 6224 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.422 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8116132 kB' 'MemUsed: 4125844 kB' 'SwapCached: 0 kB' 'Active: 452368 kB' 'Inactive: 1267392 kB' 'Active(anon): 131540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1598684 kB' 'Mapped: 48560 kB' 'AnonPages: 122752 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61092 kB' 'Slab: 134984 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
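The long run of IFS=': ' / read -r var val _ / continue entries around this point is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo one field at a time until it reaches HugePages_Surp, then echoing its value. Condensed into a stand-alone sketch for readability (the helper name and the whitespace trimming are illustrative; the actual logic is the mapfile/read loop visible in the trace):

# minimal stand-alone sketch of the traced per-node meminfo lookup
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val
    # prefer the per-node file when a node index is given and it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }            # per-node lines carry a "Node N " prefix
        var=${line%%:*}                       # field name, e.g. HugePages_Surp
        val=${line#*:}
        val=${val#"${val%%[![:space:]]*}"}    # trim leading spaces
        if [[ $var == "$get" ]]; then
            echo "${val%% *}"                 # value only, unit (kB) dropped
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0  -> 0 on this VM, matching the echo a few entries below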
00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.423 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
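With the surplus read back as 0 (the echo 0 just above), the odd_alloc bookkeeping that follows in the trace reduces to a couple of integer checks. A condensed restatement with the values reported in this log (variable names follow setup/hugepages.sh; nothing new is computed here):

# odd_alloc consistency check, using the numbers from the trace
nr_hugepages=1025   # requested count (odd on purpose)
resv=0              # HugePages_Rsvd from /proc/meminfo
surp=0              # HugePages_Surp for node 0, just read back
total=1025          # HugePages_Total from /proc/meminfo
(( total == nr_hugepages + surp + resv )) || echo 'odd_alloc: hugepage accounting mismatch'
# single NUMA node in this VM, so node 0 carries the whole allocation:
echo "node0=$total expecting $nr_hugepages"   # matches the 'node0=1025 expecting 1025' line below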
00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:12.424 node0=1025 expecting 1025 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:12.424 00:04:12.424 real 0m0.621s 00:04:12.424 user 0m0.295s 00:04:12.424 sys 0m0.372s 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.424 21:17:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.424 ************************************ 00:04:12.424 END TEST odd_alloc 00:04:12.424 ************************************ 00:04:12.424 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:12.424 21:17:45 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:12.424 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.424 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.424 21:17:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.424 ************************************ 00:04:12.424 START TEST custom_alloc 00:04:12.424 ************************************ 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.424 21:17:45 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.004 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.004 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161860 kB' 'MemAvailable: 10543692 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448024 kB' 'Inactive: 1267396 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118560 kB' 'Mapped: 48016 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134920 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6164 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
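For context on the verification in progress here: the custom_alloc test requested 1048576 kB (1 GiB) of huge pages in total, which at the 2048 kB Hugepagesize shown in the meminfo dump above works out to 512 pages, all pinned to node 0 through HUGENODE. A back-of-the-envelope restatement of the sizing the trace already performed (variable names are illustrative):

# sizing behind HUGENODE='nodes_hp[0]=512' (values from the trace)
size_kb=1048576                                  # requested total, i.e. 1 GiB expressed in kB
hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # = 512
echo "HUGENODE='nodes_hp[0]=$nr_hugepages'"      # the single node gets the full 512 pages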
00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.004 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161860 kB' 'MemAvailable: 10543692 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448032 kB' 'Inactive: 1267396 kB' 'Active(anon): 
127204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118304 kB' 'Mapped: 47880 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134920 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6192 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.005 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.006 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161860 kB' 'MemAvailable: 10543692 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 447904 kB' 'Inactive: 1267396 kB' 'Active(anon): 127076 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118212 kB' 'Mapped: 47880 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134912 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73820 kB' 'KernelStack: 6192 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.007 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.008 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:13.009 nr_hugepages=512 00:04:13.009 resv_hugepages=0 
00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.009 surplus_hugepages=0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.009 anon_hugepages=0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161860 kB' 'MemAvailable: 10543692 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448424 kB' 'Inactive: 1267396 kB' 'Active(anon): 127596 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 118776 kB' 'Mapped: 47880 kB' 'Shmem: 10464 kB' 'KReclaimable: 61092 kB' 'Slab: 134912 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73820 kB' 'KernelStack: 6192 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.009 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 
21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.010 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.011 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9161860 kB' 'MemUsed: 3080116 kB' 'SwapCached: 0 kB' 'Active: 448096 kB' 'Inactive: 1267396 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 1598688 kB' 'Mapped: 47880 kB' 'AnonPages: 118400 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61092 kB' 'Slab: 134896 kB' 'SReclaimable: 61092 kB' 'SUnreclaim: 73804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.270 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.270 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.271 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.272 node0=512 expecting 512 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.272 00:04:13.272 real 0m0.668s 00:04:13.272 user 0m0.301s 00:04:13.272 sys 0m0.404s 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.272 21:17:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.272 ************************************ 00:04:13.272 END TEST custom_alloc 
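To summarize the checks custom_alloc just completed: get_meminfo reported HugePages_Total=512, HugePages_Rsvd=0 and, for node 0, HugePages_Surp=0; the global test 512 == nr_hugepages + surp + resv passed; and get_nodes found a single NUMA node (no_nodes=1) whose 512 pages account for everything the test configured, hence the "node0=512 expecting 512" line just before the timing summary. The following is a loose, single-node condensation of that bookkeeping, with variable names taken from the trace but the control flow simplified -- it is illustrative only, not the SPDK hugepages.sh logic verbatim, which handles arbitrary node counts and per-node splits:

  #!/usr/bin/env bash
  # Single-node condensation of the verification traced above (illustrative).
  nr_hugepages=512; resv=0; surp=0            # from get_meminfo HugePages_{Rsvd,Surp}
  (( nr_hugepages + surp + resv == 512 )) || exit 1

  nodes_test[0]=512                           # pages the test asked node 0 to hold
  nodes_sys[0]=512                            # pages the kernel reports for node 0
  (( nodes_test[0] += resv + surp ))          # fold reserved/surplus pages back in
  echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"
  [[ ${nodes_sys[0]} == "${nodes_test[0]}" ]] || exit 1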
00:04:13.272 ************************************ 00:04:13.272 21:17:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.272 21:17:46 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.272 21:17:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.272 21:17:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.272 21:17:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.272 ************************************ 00:04:13.272 START TEST no_shrink_alloc 00:04:13.272 ************************************ 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.272 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.792 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.792 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:13.792 
21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.792 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113176 kB' 'MemAvailable: 9495000 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448184 kB' 'Inactive: 1267396 kB' 'Active(anon): 127356 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118544 kB' 'Mapped: 47916 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134860 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6176 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
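Two things happen in the no_shrink_alloc setup traced just above. First, get_test_nr_hugepages takes the requested size 2097152 (apparently in kB, i.e. 2 GiB) and, with the default 2048 kB Hugepagesize, arrives at nr_hugepages=1024, all assigned to node 0 via node_ids=('0') -- which is why the meminfo snapshot already shows HugePages_Total: 1024 and Hugetlb: 2097152 kB. Second, verify_nr_hugepages only bothers counting AnonHugePages when transparent hugepages are not set to [never]; the "always [madvise] never" string at hugepages.sh@96 has the format of /sys/kernel/mm/transparent_hugepage/enabled, though that path is an assumption here, not something the trace names. A small sketch of both steps, with the numbers taken from the trace itself:

  #!/usr/bin/env bash
  # Sketch of the no_shrink_alloc setup steps traced above; sizes come from
  # the trace, the transparent_hugepage path is an assumed source for the
  # "always [madvise] never" string.
  size_kb=2097152                                   # argument to get_test_nr_hugepages
  hugepage_kb=2048                                  # Hugepagesize: 2048 kB
  nr_hugepages=$(( size_kb / hugepage_kb ))         # -> 1024, all placed on node 0
  echo "nr_hugepages=$nr_hugepages"

  # Count AnonHugePages only when transparent hugepages are not globally
  # disabled (the bracketed token is the active policy).
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp != *"[never]"* ]]; then
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "anon_hugepages=${anon_kb:-0}"           # 0 kB in this run
  fi

In this run the active policy is [madvise], so the gate passes; the AnonHugePages scan that surrounds this point returns 0 and verify_nr_hugepages then moves on to HugePages_Surp.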
00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.793 
21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.793 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the setup/common.sh@32 key check, @32 continue, @31 IFS=': ', @31 read -r var val _ cycle repeats for every remaining /proc/meminfo key from Active(file) through HardwareCorrupted; none of them matches AnonHugePages ...]
00:04:13.794 21:17:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.794 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113176 kB' 'MemAvailable: 9495000 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 447996 kB' 'Inactive: 1267396 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 118396 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134860 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6220 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
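
The xtrace above is the get_meminfo pattern from setup/common.sh: cache the meminfo source with mapfile, then split each "Key: value" line with IFS=': ' read and skip (continue) until the requested key matches, echoing the value (0 for AnonHugePages on this box). Below is a minimal bash sketch of that pattern only; the function name get_meminfo_sketch and its exact structure are illustrative assumptions, not the literal SPDK helper.

    #!/usr/bin/env bash
    # Sketch (assumption, not the SPDK source): look up one key in /proc/meminfo,
    # or in a per-NUMA-node meminfo file when a node id is supplied.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}   # key to look up, optional NUMA node id
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Mirrors the @23/@25 checks: only switch to the per-node file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"            # @28: cache the file
        mem=("${mem[@]#Node +([0-9]) }")     # @29: strip the "Node <n> " prefix used by per-node files

        # @16/@31/@32/@33: feed the cached lines back through a ': '-split read loop
        # and skip every key until the requested one matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch AnonHugePages   # prints 0 on the system traced above

Feeding the cached array back through printf keeps the parse independent of whether the source was /proc/meminfo or a per-node file, which is why the trace shows the same @31/@32 loop for every lookup.
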
[... the @32 key check / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle now walks the dump above looking for HugePages_Surp, skipping every key from MemTotal through HugePages_Rsvd ...]
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
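
The @17-@29 lines just traced repeat the same setup for HugePages_Rsvd. Because node is empty here, the @23 test degenerates to /sys/devices/system/node/node/meminfo (which does not exist), so the system-wide /proc/meminfo is read and the "Node <n> " strip at @29 is a no-op. A compact restatement of that selection logic follows; the node=0 value is an assumed example to show the per-node branch, not what this run used.

    # Assumed example values; this run had node="" and therefore used /proc/meminfo.
    node=0                                        # e.g. a per-NUMA-node query
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # taken only when the node file exists
    fi

    shopt -s extglob                              # for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines read "Node 0 MemTotal: ...", strip that prefix
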
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113176 kB' 'MemAvailable: 9495000 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 447996 kB' 'Inactive: 1267396 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118400 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134860 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6220 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.796 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... the @32 key check / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle walks this dump looking for HugePages_Rsvd, skipping every key from MemTotal through HugePages_Free ...]
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:13.798 nr_hugepages=1024
00:04:13.798 resv_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.798 surplus_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.798 anon_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
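
With resv now known, the test holds anon=0, surp=0, resv=0 and echoes nr_hugepages=1024; since the dump reports HugePages_Total: 1024, the @107 and @109 arithmetic checks both hold (1024 == 1024 + 0 + 0 and 1024 == 1024). A tiny bash restatement of that accounting follows, using the values from this log; the if wrapper and the message are assumptions added for illustration.

    # Values observed in this run (dump + echoes above).
    nr_hugepages=1024   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages

    # Same arithmetic as the @107/@109 trace lines: the requested 1024 pages must all be
    # present, with no surplus pages and no reservations hidden inside the total.
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "hugepage pool consistent: ${nr_hugepages} pages, surp=${surp}, resv=${resv}, anon=${anon}"
    fi
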
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:13.798 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113176 kB' 'MemAvailable: 9495000 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 447824 kB' 'Inactive: 1267396 kB' 'Active(anon): 126996 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118192 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134860 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6204 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[... the @32 key check / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle now scans this dump for HugePages_Total, skipping every key from MemTotal through AnonPages ...]
00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:13.799 21:17:47
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.799 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8113176 kB' 'MemUsed: 4128800 kB' 'SwapCached: 0 kB' 'Active: 448040 kB' 'Inactive: 1267396 kB' 'Active(anon): 127212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1598688 kB' 'Mapped: 47820 kB' 'AnonPages: 118400 kB' 'Shmem: 10464 kB' 'KernelStack: 6220 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61080 kB' 'Slab: 134860 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.800 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.801 node0=1024 expecting 1024 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.801 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.371 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.371 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.371 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:14.371 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:14.371 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114760 kB' 'MemAvailable: 9496584 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448288 kB' 'Inactive: 1267396 kB' 'Active(anon): 127460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118588 kB' 'Mapped: 47924 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134848 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73768 kB' 'KernelStack: 6180 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.371 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.371 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[xtrace trimmed: VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are read and skipped the same way]
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:14.372 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.373 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114760 kB' 'MemAvailable: 9496584 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 448336 kB' 'Inactive: 1267396 kB' 'Active(anon): 127508 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118612 kB' 'Mapped: 47864 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134848 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73768 kB' 'KernelStack: 6164 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 337780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the read loop skips every field from MemTotal through HugePages_Rsvd until the requested key matches]
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
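
The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: the file is loaded into an array, any "Node <n> " prefix is stripped, and each "key: value" pair is read and skipped until the requested key (AnonHugePages, then HugePages_Surp) matches, at which point the bare value is echoed back to setup/hugepages.sh. A minimal sketch of that lookup pattern, reconstructed from the trace and simplified; the function name is illustrative and this is not the repository's verbatim source:

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above. Reconstructed from the
# xtrace and simplified; name and layout are illustrative.
shopt -s extglob # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {
    local get=$1  # field to look up, e.g. HugePages_Surp
    local node=$2 # optional NUMA node number
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node meminfo files prefix every line with "Node <n> ".
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan field by field until the requested key matches, then print its
    # value - this is the long run of read/continue entries in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage matching the trace: hugepage counters come back as bare numbers.
# get_meminfo_sketch HugePages_Surp   -> 0
# get_meminfo_sketch HugePages_Total  -> 1024
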
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.374 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.375 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114764 kB' 'MemAvailable: 9496584 kB' 'Buffers: 2436 kB' 'Cached: 1596248 kB' 'SwapCached: 0 kB' 'Active: 448012 kB' 'Inactive: 1267392 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118300 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134848 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73768 kB' 'KernelStack: 6128 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the read loop skips every field from MemTotal through HugePages_Free until the requested key matches]
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:14.376 nr_hugepages=1024
resv_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
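
The entries above are the accounting step in setup/hugepages.sh: the counters just collected (anon, surp, resv) are echoed together with nr_hugepages and checked against the configured pool before a final HugePages_Total lookup runs below. A rough standalone equivalent of that check, assuming the 1024-page pool this run configures and pulling the counters with awk instead of the traced read loop:

#!/usr/bin/env bash
# Rough equivalent of the accounting traced above (setup/hugepages.sh@97-@110),
# using awk for brevity instead of the script's read loop. "expected" stands in
# for whatever pool size the test configured - 1024 pages in this run.
meminfo() { awk -v key="$1" -F': *' '$1 == key { print $2 + 0 }' /proc/meminfo; }

expected=1024
anon=$(meminfo AnonHugePages)           # kB of transparent hugepages in use
surp=$(meminfo HugePages_Surp)          # surplus pages
resv=$(meminfo HugePages_Rsvd)          # reserved pages
nr_hugepages=$(meminfo HugePages_Total) # current pool size

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool must still account for every expected page, and the total itself
# must be unchanged - the "no shrink" property this test case is after.
(( expected == nr_hugepages + surp + resv ))
(( expected == nr_hugepages ))

With the values in the snapshots above (HugePages_Total 1024, HugePages_Rsvd 0, HugePages_Surp 0), both arithmetic tests succeed; a shrunk pool or leftover reserved or surplus pages would make them return non-zero and fail the no_shrink_alloc check.
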
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:14.376 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.377 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114764 kB' 'MemAvailable: 9496588 kB' 'Buffers: 2436 kB' 'Cached: 1596252 kB' 'SwapCached: 0 kB' 'Active: 447936 kB' 'Inactive: 1267396 kB' 'Active(anon): 127108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 118328 kB' 'Mapped: 47820 kB' 'Shmem: 10464 kB' 'KReclaimable: 61080 kB' 'Slab: 134816 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73736 kB' 'KernelStack: 6160 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 153452 kB' 'DirectMap2M: 5089280 kB' 'DirectMap1G: 9437184 kB'
[xtrace trimmed: the read loop skips every field from MemTotal through VmallocChunk while scanning for HugePages_Total]
00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.638 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8114764 kB' 'MemUsed: 4127212 kB' 'SwapCached: 0 kB' 'Active: 447656 kB' 'Inactive: 1267396 kB' 'Active(anon): 126828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320828 
kB' 'Inactive(file): 1267396 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1598688 kB' 'Mapped: 47820 kB' 'AnonPages: 118036 kB' 'Shmem: 10464 kB' 'KernelStack: 6144 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61080 kB' 'Slab: 134816 kB' 'SReclaimable: 61080 kB' 'SUnreclaim: 73736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 
21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.639 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.640 node0=1024 expecting 1024 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.640 00:04:14.640 real 0m1.332s 00:04:14.640 user 0m0.642s 00:04:14.640 sys 0m0.782s 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.640 21:17:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:14.640 ************************************ 00:04:14.640 END TEST no_shrink_alloc 00:04:14.640 ************************************ 00:04:14.640 21:17:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.640 
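The long run of 'continue' lines above is the xtrace of setup/common.sh's get_meminfo walking a meminfo file field by field until it reaches the requested key (HugePages_Total, then HugePages_Surp for node 0). A condensed sketch of that helper, with the mapfile/printf bookkeeping shown in the trace simplified away:

  get_meminfo() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      # Prefer the per-node meminfo file when a node index was given.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node "$node" }             # drop the "Node N " prefix, if present
          IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Total val=1024
          [[ $var == "$get" ]] || continue       # every mismatch is one 'continue' above
          echo "$val"
          return 0
      done <"$mem_f"
      return 1
  }

  # Checks corresponding to this trace: 1024 hugepages total, 0 surplus on node 0.
  (( $(get_meminfo HugePages_Total) == 1024 )) && echo 'node0=1024 expecting 1024'
  get_meminfo HugePages_Surp 0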
21:17:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.640 21:17:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.640 00:04:14.640 real 0m5.936s 00:04:14.640 user 0m2.638s 00:04:14.640 sys 0m3.461s 00:04:14.640 21:17:47 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.640 21:17:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.640 ************************************ 00:04:14.640 END TEST hugepages 00:04:14.640 ************************************ 00:04:14.640 21:17:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.640 21:17:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.640 21:17:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.640 21:17:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.640 21:17:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.640 ************************************ 00:04:14.640 START TEST driver 00:04:14.640 ************************************ 00:04:14.640 21:17:47 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.898 * Looking for test storage... 00:04:14.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.898 21:17:48 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:14.898 21:17:48 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.898 21:17:48 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.464 21:17:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:15.464 21:17:48 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.464 21:17:48 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.464 21:17:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.464 ************************************ 00:04:15.464 START TEST guess_driver 00:04:15.464 ************************************ 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
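What follows in the trace is pick_driver's vfio test: with zero entries under /sys/kernel/iommu_groups and enable_unsafe_noiommu_mode not set to Y, it returns 1 and the script falls back to uio_pci_generic (resolved through modprobe --show-depends). A rough sketch of that decision, not the verbatim setup/driver.sh:

  vfio_usable() {
      shopt -s nullglob                           # so an empty directory yields an empty array
      local unsafe_vfio=
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      local iommu_groups=(/sys/kernel/iommu_groups/*)
      (( ${#iommu_groups[@]} > 0 )) && return 0   # hardware IOMMU groups exist
      [[ $unsafe_vfio == Y ]] && return 0         # no-IOMMU mode explicitly allowed
      return 1                                    # this run: 0 groups, '' != Y
  }

  if ! vfio_usable; then
      # Fall back to uio_pci_generic when the module resolves via modprobe.
      modprobe --show-depends uio_pci_generic >/dev/null 2>&1 && driver=uio_pci_generic
  fi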
00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:15.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:15.464 Looking for driver=uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.464 21:17:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:16.403 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.661 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.661 21:17:49 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.661 21:17:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.661 21:17:49 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.228 00:04:17.228 real 0m1.826s 00:04:17.228 user 0m0.659s 00:04:17.228 sys 0m1.223s 00:04:17.228 21:17:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:17.228 21:17:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.228 ************************************ 00:04:17.228 END TEST guess_driver 00:04:17.228 ************************************ 00:04:17.487 21:17:50 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:17.487 00:04:17.487 real 0m2.732s 00:04:17.487 user 0m0.992s 00:04:17.487 sys 0m1.901s 00:04:17.487 21:17:50 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.487 21:17:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.487 ************************************ 00:04:17.487 END TEST driver 00:04:17.487 ************************************ 00:04:17.487 21:17:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.487 21:17:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:17.487 21:17:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.487 21:17:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.487 21:17:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.487 ************************************ 00:04:17.487 START TEST devices 00:04:17.487 ************************************ 00:04:17.487 21:17:50 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:17.487 * Looking for test storage... 00:04:17.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.487 21:17:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:17.487 21:17:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:17.487 21:17:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.487 21:17:50 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.422 21:17:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.422 21:17:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
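Before the device tests claim any disks, get_zoned_devs filters out zoned namespaces: any /sys/block/nvme* device whose queue/zoned attribute reports something other than "none" would be excluded from the mount tests. In this run all four devices report "none", so nothing is skipped. A minimal sketch of that filter (an illustration, not the verbatim autotest_common.sh):

  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      dev=${nvme##*/}
      [[ -e /sys/block/$dev/queue/zoned ]] || continue
      # "none" means a regular (non-zoned) namespace; anything else gets flagged.
      [[ $(<"/sys/block/$dev/queue/zoned") != none ]] && zoned_devs[$dev]=1
  done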
00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.423 21:17:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.423 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:18.423 21:17:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:18.423 21:17:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:18.423 No valid GPT data, bailing 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:18.682 
21:17:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:18.682 No valid GPT data, bailing 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:18.682 No valid GPT data, bailing 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:18.682 21:17:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:18.682 21:17:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:18.682 21:17:51 
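The loop running through these lines decides which NVMe disks the tests may use: a device is kept only when nothing recognizable is already on it ("No valid GPT data, bailing" from spdk-gpt.py, and blkid reports no PTTYPE) and its capacity clears min_disk_size=3221225472 (3 GiB). A simplified sketch of that eligibility check, assuming the usual 512-byte sysfs sector units and leaving out the spdk-gpt.py pass:

  min_disk_size=3221225472   # 3 GiB, per setup/devices.sh@198
  blocks=()

  block_unused() {
      # Empty PTTYPE output means no partition table is claiming the disk.
      [[ -z $(blkid -s PTTYPE -o value "/dev/$1" 2>/dev/null) ]]
  }

  block_size_bytes() {
      # Sector count from sysfs times 512, e.g. 8388608 * 512 = 4294967296 (4 GiB).
      echo $(( $(<"/sys/block/$1/size") * 512 ))
  }

  for block in /sys/block/nvme*; do
      dev=${block##*/}
      block_unused "$dev" || continue
      (( $(block_size_bytes "$dev") >= min_disk_size )) && blocks+=("$dev")
  done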
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:18.682 No valid GPT data, bailing 00:04:18.682 21:17:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.682 21:17:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.682 21:17:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:18.682 21:17:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:18.682 21:17:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:18.682 21:17:52 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:18.682 21:17:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:18.682 21:17:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.682 21:17:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.682 21:17:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 ************************************ 00:04:18.682 START TEST nvme_mount 00:04:18.682 ************************************ 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.682 21:17:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:20.059 Creating new GPT entries in memory. 00:04:20.059 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:20.059 other utilities. 00:04:20.059 21:17:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:20.059 21:17:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.059 21:17:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.059 21:17:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.059 21:17:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:20.994 Creating new GPT entries in memory. 00:04:20.994 The operation has completed successfully. 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57002 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.994 21:17:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.252 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.511 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.769 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.769 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.769 21:17:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.027 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.027 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.027 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.027 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 
-- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.027 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.284 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.541 21:17:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.541 21:17:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.118 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.377 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.377 00:04:23.377 real 0m4.555s 00:04:23.377 user 0m0.855s 00:04:23.377 sys 0m1.452s 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.377 21:17:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:23.377 ************************************ 00:04:23.377 END TEST nvme_mount 00:04:23.377 ************************************ 00:04:23.377 21:17:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:23.377 21:17:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.377 21:17:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.377 21:17:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.377 21:17:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.377 ************************************ 00:04:23.377 START TEST dm_mount 00:04:23.377 ************************************ 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.377 21:17:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:24.752 Creating new GPT entries in memory. 00:04:24.752 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.752 other utilities. 00:04:24.752 21:17:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.752 21:17:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.752 21:17:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.752 21:17:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.752 21:17:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:25.690 Creating new GPT entries in memory. 00:04:25.690 The operation has completed successfully. 00:04:25.690 21:17:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.690 21:17:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.690 21:17:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.690 21:17:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.690 21:17:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:26.627 The operation has completed successfully. 
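The partition_drive sequence traced above reduces to a handful of sgdisk calls. The recap below is an illustrative sketch assembled from the commands visible in the xtrace, not an excerpt of setup/common.sh; the device name and sector ranges are simply the values this run used.

# Illustrative recap of the partitioning traced above (not verbatim setup/common.sh).
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                                 # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191         # partition 1: sectors 2048-264191
flock "$disk" sgdisk "$disk" --new=2:264192:526335       # partition 2: sectors 264192-526335
# scripts/sync_dev_uevents.sh (also visible above) then waits for the kernel uevents
# for nvme0n1p1 and nvme0n1p2 before the test touches the new partitions.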
00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57435 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.627 21:17:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.896 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.167 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.167 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.167 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.167 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.426 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.426 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.427 21:18:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.686 21:18:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.945 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.945 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.945 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.945 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:28.205 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:28.205 00:04:28.205 real 0m4.749s 00:04:28.205 user 0m0.594s 00:04:28.205 sys 0m1.097s 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.205 21:18:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.205 ************************************ 00:04:28.205 END TEST dm_mount 00:04:28.205 ************************************ 00:04:28.205 21:18:01 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:28.205 21:18:01 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.205 21:18:01 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.464 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.464 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.464 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.464 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.464 21:18:01 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:28.464 00:04:28.464 real 0m11.096s 00:04:28.464 user 0m2.124s 00:04:28.464 sys 0m3.383s 00:04:28.464 21:18:01 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.464 ************************************ 00:04:28.464 END TEST devices 00:04:28.464 21:18:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.464 ************************************ 00:04:28.722 21:18:01 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:28.722 00:04:28.722 real 0m26.252s 00:04:28.722 user 0m8.217s 00:04:28.722 sys 0m12.810s 00:04:28.722 21:18:01 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.722 21:18:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:28.722 ************************************ 00:04:28.722 END TEST setup.sh 00:04:28.722 ************************************ 00:04:28.722 21:18:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.722 21:18:01 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:29.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.659 Hugepages 00:04:29.659 node hugesize free / total 00:04:29.659 node0 1048576kB 0 / 0 00:04:29.659 node0 2048kB 2048 / 2048 00:04:29.659 00:04:29.659 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.659 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:29.659 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:29.918 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:29.918 21:18:03 -- spdk/autotest.sh@130 -- # uname -s 00:04:29.918 21:18:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:29.918 21:18:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:29.918 21:18:03 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.485 0000:00:03.0 (1af4 1001): Active 
devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.744 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.744 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.744 21:18:04 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:32.120 21:18:05 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:32.120 21:18:05 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:32.120 21:18:05 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.120 21:18:05 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:32.120 21:18:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:32.120 21:18:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:32.120 21:18:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.120 21:18:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.120 21:18:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:32.120 21:18:05 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:32.120 21:18:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:32.120 21:18:05 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.378 Waiting for block devices as requested 00:04:32.636 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.636 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:32.636 21:18:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:32.636 21:18:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:32.636 21:18:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.636 21:18:05 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:32.636 21:18:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.636 21:18:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:32.636 21:18:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:32.917 21:18:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:32.917 21:18:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:32.917 21:18:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1557 -- # continue 
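Before deciding whether a revert is needed, the trace above probes each controller's Optional Admin Command Support field and its unallocated capacity. The shape of that check, reconstructed from the commands in the xtrace, is roughly as follows; the bit-mask step is an inference from oacs=' 0x12a' yielding oacs_ns_manage=8, and /dev/nvme1 is the controller this run resolved for 0000:00:10.0. The same steps repeat for the second controller (/dev/nvme0) just below.

# Sketch of the controller capability check traced above (illustrative, not verbatim).
ctrlr=/dev/nvme1
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)            # ' 0x12a' in this run
oacs_ns_manage=$(( oacs & 0x8 ))                                   # bit 3 = Namespace Management
if (( oacs_ns_manage != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    # unvmcap of 0 means no unallocated NVM capacity, so there is nothing to revert
    (( unvmcap == 0 )) && echo "skipping $ctrlr"
fi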
00:04:32.917 21:18:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:32.917 21:18:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:32.917 21:18:06 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:32.917 21:18:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:32.917 21:18:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:32.917 21:18:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:32.917 21:18:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:32.917 21:18:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:32.917 21:18:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:32.917 21:18:06 -- common/autotest_common.sh@1557 -- # continue 00:04:32.917 21:18:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:32.917 21:18:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.917 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:04:32.917 21:18:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:32.917 21:18:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.917 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:04:32.917 21:18:06 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.882 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.882 21:18:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:33.882 21:18:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.882 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:04:33.882 21:18:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:33.882 21:18:07 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:33.882 21:18:07 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:33.882 21:18:07 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:33.882 21:18:07 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:33.882 21:18:07 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:33.882 21:18:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.882 21:18:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.882 21:18:07 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.882 21:18:07 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.882 21:18:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:34.140 21:18:07 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:34.140 21:18:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:34.140 21:18:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:34.140 21:18:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:34.140 21:18:07 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:34.140 21:18:07 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.140 21:18:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:34.140 21:18:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:34.140 21:18:07 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:34.140 21:18:07 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.140 21:18:07 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:34.140 21:18:07 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:34.140 21:18:07 -- common/autotest_common.sh@1593 -- # return 0 00:04:34.140 21:18:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:34.140 21:18:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:34.140 21:18:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.140 21:18:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.140 21:18:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:34.140 21:18:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.140 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:04:34.140 21:18:07 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:34.140 21:18:07 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.140 21:18:07 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.140 21:18:07 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.140 21:18:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.140 21:18:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.140 21:18:07 -- common/autotest_common.sh@10 -- # set +x 00:04:34.140 ************************************ 00:04:34.140 START TEST env 00:04:34.140 ************************************ 00:04:34.140 21:18:07 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.140 * Looking for test storage... 
00:04:34.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:34.140 21:18:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.140 21:18:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.140 21:18:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.140 21:18:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.140 ************************************ 00:04:34.140 START TEST env_memory 00:04:34.140 ************************************ 00:04:34.140 21:18:07 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.398 00:04:34.398 00:04:34.398 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.398 http://cunit.sourceforge.net/ 00:04:34.398 00:04:34.398 00:04:34.398 Suite: memory 00:04:34.398 Test: alloc and free memory map ...[2024-07-15 21:18:07.544347] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.398 passed 00:04:34.398 Test: mem map translation ...[2024-07-15 21:18:07.564542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.398 [2024-07-15 21:18:07.564568] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.398 [2024-07-15 21:18:07.564605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.398 [2024-07-15 21:18:07.564613] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.398 passed 00:04:34.398 Test: mem map registration ...[2024-07-15 21:18:07.602486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:34.398 [2024-07-15 21:18:07.602520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:34.398 passed 00:04:34.398 Test: mem map adjacent registrations ...passed 00:04:34.398 00:04:34.398 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.398 suites 1 1 n/a 0 0 00:04:34.398 tests 4 4 4 0 0 00:04:34.398 asserts 152 152 152 0 n/a 00:04:34.398 00:04:34.398 Elapsed time = 0.138 seconds 00:04:34.398 00:04:34.398 real 0m0.157s 00:04:34.398 user 0m0.144s 00:04:34.398 sys 0m0.009s 00:04:34.398 21:18:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.398 21:18:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:34.398 ************************************ 00:04:34.398 END TEST env_memory 00:04:34.398 ************************************ 00:04:34.398 21:18:07 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.398 21:18:07 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.398 21:18:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.398 21:18:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.398 21:18:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.398 ************************************ 00:04:34.398 START TEST env_vtophys 
00:04:34.398 ************************************ 00:04:34.398 21:18:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:34.398 EAL: lib.eal log level changed from notice to debug 00:04:34.398 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.398 EAL: Detected lcore 1 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 2 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 3 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 4 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 5 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 6 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 7 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 8 as core 0 on socket 0 00:04:34.399 EAL: Detected lcore 9 as core 0 on socket 0 00:04:34.399 EAL: Maximum logical cores by configuration: 128 00:04:34.399 EAL: Detected CPU lcores: 10 00:04:34.399 EAL: Detected NUMA nodes: 1 00:04:34.399 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:34.399 EAL: Detected shared linkage of DPDK 00:04:34.399 EAL: No shared files mode enabled, IPC will be disabled 00:04:34.399 EAL: Selected IOVA mode 'PA' 00:04:34.399 EAL: Probing VFIO support... 00:04:34.399 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:34.399 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:34.399 EAL: Ask a virtual area of 0x2e000 bytes 00:04:34.399 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:34.399 EAL: Setting up physically contiguous memory... 00:04:34.399 EAL: Setting maximum number of open files to 524288 00:04:34.399 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:34.399 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:34.399 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.399 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:34.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.657 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:34.657 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:34.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.657 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:34.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.657 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:34.657 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:34.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.657 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:34.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.657 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:34.657 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:34.657 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.657 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:34.657 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.657 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.657 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:34.658 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:34.658 EAL: Hugepages will be freed exactly as allocated. 
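The hex sizes in the EAL messages above decode to round numbers: each of the four memseg lists reserves 0x400000000 bytes of virtual address space, which is 16 GiB, backed by 0x800 kB (2 MiB) hugepages, and 8192 segments of 2 MiB per list is exactly that 16 GiB. A quick shell check of the arithmetic:

# Plain shell arithmetic for the sizes reported by EAL above.
echo $(( 0x400000000 / 1024**3 ))   # 16 -> VA reservation per memseg list, in GiB
echo $(( 0x800 / 1024 ))            # 2  -> page size 0x800 kB expressed in MiB
echo $(( 8192 * 2 / 1024 ))         # 16 -> n_segs (8192) x 2 MiB, in GiB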
00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: TSC frequency is ~2490000 KHz 00:04:34.658 EAL: Main lcore 0 is ready (tid=7faf70ff9a00;cpuset=[0]) 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 0 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 2MB 00:04:34.658 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:34.658 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:34.658 EAL: Mem event callback 'spdk:(nil)' registered 00:04:34.658 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:34.658 00:04:34.658 00:04:34.658 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.658 http://cunit.sourceforge.net/ 00:04:34.658 00:04:34.658 00:04:34.658 Suite: components_suite 00:04:34.658 Test: vtophys_malloc_test ...passed 00:04:34.658 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.658 EAL: Trying to obtain current memory policy. 
00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.658 EAL: Restoring previous memory policy: 4 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.658 EAL: request: mp_malloc_sync 00:04:34.658 EAL: No shared files mode enabled, IPC is disabled 00:04:34.658 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.658 EAL: Trying to obtain current memory policy. 00:04:34.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.917 EAL: Restoring previous memory policy: 4 00:04:34.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.917 EAL: request: mp_malloc_sync 00:04:34.917 EAL: No shared files mode enabled, IPC is disabled 00:04:34.917 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.917 EAL: request: mp_malloc_sync 00:04:34.917 EAL: No shared files mode enabled, IPC is disabled 00:04:34.917 EAL: Heap on socket 0 was shrunk by 258MB 00:04:34.917 EAL: Trying to obtain current memory policy. 
00:04:34.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.917 EAL: Restoring previous memory policy: 4 00:04:34.917 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.917 EAL: request: mp_malloc_sync 00:04:34.917 EAL: No shared files mode enabled, IPC is disabled 00:04:34.917 EAL: Heap on socket 0 was expanded by 514MB 00:04:35.175 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.175 EAL: request: mp_malloc_sync 00:04:35.175 EAL: No shared files mode enabled, IPC is disabled 00:04:35.175 EAL: Heap on socket 0 was shrunk by 514MB 00:04:35.175 EAL: Trying to obtain current memory policy. 00:04:35.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.433 EAL: Restoring previous memory policy: 4 00:04:35.433 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.434 EAL: request: mp_malloc_sync 00:04:35.434 EAL: No shared files mode enabled, IPC is disabled 00:04:35.434 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.434 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.692 passedEAL: request: mp_malloc_sync 00:04:35.692 EAL: No shared files mode enabled, IPC is disabled 00:04:35.692 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.692 00:04:35.692 00:04:35.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.692 suites 1 1 n/a 0 0 00:04:35.692 tests 2 2 2 0 0 00:04:35.692 asserts 5253 5253 5253 0 n/a 00:04:35.692 00:04:35.692 Elapsed time = 0.979 seconds 00:04:35.692 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.692 EAL: request: mp_malloc_sync 00:04:35.692 EAL: No shared files mode enabled, IPC is disabled 00:04:35.692 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.692 EAL: No shared files mode enabled, IPC is disabled 00:04:35.692 EAL: No shared files mode enabled, IPC is disabled 00:04:35.692 EAL: No shared files mode enabled, IPC is disabled 00:04:35.692 00:04:35.692 real 0m1.176s 00:04:35.692 user 0m0.632s 00:04:35.692 sys 0m0.418s 00:04:35.692 ************************************ 00:04:35.692 END TEST env_vtophys 00:04:35.692 ************************************ 00:04:35.692 21:18:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.692 21:18:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:35.692 21:18:08 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.692 21:18:08 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.692 21:18:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.692 21:18:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.692 21:18:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.692 ************************************ 00:04:35.692 START TEST env_pci 00:04:35.692 ************************************ 00:04:35.692 21:18:08 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:35.692 00:04:35.692 00:04:35.692 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.692 http://cunit.sourceforge.net/ 00:04:35.692 00:04:35.692 00:04:35.692 Suite: pci 00:04:35.692 Test: pci_hook ...[2024-07-15 21:18:08.981961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58639 has claimed it 00:04:35.692 passed 00:04:35.692 00:04:35.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.692 suites 1 1 n/a 0 0 00:04:35.692 tests 1 1 1 0 0 00:04:35.692 asserts 25 25 25 0 n/a 00:04:35.692 
00:04:35.692 Elapsed time = 0.004 seconds 00:04:35.692 EAL: Cannot find device (10000:00:01.0) 00:04:35.692 EAL: Failed to attach device on primary process 00:04:35.692 00:04:35.692 real 0m0.021s 00:04:35.692 user 0m0.007s 00:04:35.692 sys 0m0.014s 00:04:35.692 21:18:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.692 ************************************ 00:04:35.692 END TEST env_pci 00:04:35.692 ************************************ 00:04:35.692 21:18:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:35.692 21:18:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.692 21:18:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.692 21:18:09 env -- env/env.sh@15 -- # uname 00:04:35.692 21:18:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.692 21:18:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.692 21:18:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.692 21:18:09 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:35.692 21:18:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.692 21:18:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.951 ************************************ 00:04:35.951 START TEST env_dpdk_post_init 00:04:35.951 ************************************ 00:04:35.951 21:18:09 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.951 EAL: Detected CPU lcores: 10 00:04:35.951 EAL: Detected NUMA nodes: 1 00:04:35.951 EAL: Detected shared linkage of DPDK 00:04:35.951 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.951 EAL: Selected IOVA mode 'PA' 00:04:35.951 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:35.951 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:35.951 Starting DPDK initialization... 00:04:35.951 Starting SPDK post initialization... 00:04:35.951 SPDK NVMe probe 00:04:35.951 Attaching to 0000:00:10.0 00:04:35.951 Attaching to 0000:00:11.0 00:04:35.951 Attached to 0000:00:10.0 00:04:35.951 Attached to 0000:00:11.0 00:04:35.951 Cleaning up... 
00:04:35.951 00:04:35.951 real 0m0.189s 00:04:35.951 user 0m0.050s 00:04:35.951 sys 0m0.039s 00:04:35.951 21:18:09 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.951 21:18:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.951 ************************************ 00:04:35.951 END TEST env_dpdk_post_init 00:04:35.951 ************************************ 00:04:35.951 21:18:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.951 21:18:09 env -- env/env.sh@26 -- # uname 00:04:36.209 21:18:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.209 21:18:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.209 21:18:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.209 21:18:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.209 21:18:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.209 ************************************ 00:04:36.209 START TEST env_mem_callbacks 00:04:36.209 ************************************ 00:04:36.209 21:18:09 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.209 EAL: Detected CPU lcores: 10 00:04:36.210 EAL: Detected NUMA nodes: 1 00:04:36.210 EAL: Detected shared linkage of DPDK 00:04:36.210 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.210 EAL: Selected IOVA mode 'PA' 00:04:36.210 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.210 00:04:36.210 00:04:36.210 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.210 http://cunit.sourceforge.net/ 00:04:36.210 00:04:36.210 00:04:36.210 Suite: memory 00:04:36.210 Test: test ... 
00:04:36.210 register 0x200000200000 2097152 00:04:36.210 malloc 3145728 00:04:36.210 register 0x200000400000 4194304 00:04:36.210 buf 0x200000500000 len 3145728 PASSED 00:04:36.210 malloc 64 00:04:36.210 buf 0x2000004fff40 len 64 PASSED 00:04:36.210 malloc 4194304 00:04:36.210 register 0x200000800000 6291456 00:04:36.210 buf 0x200000a00000 len 4194304 PASSED 00:04:36.210 free 0x200000500000 3145728 00:04:36.210 free 0x2000004fff40 64 00:04:36.210 unregister 0x200000400000 4194304 PASSED 00:04:36.210 free 0x200000a00000 4194304 00:04:36.210 unregister 0x200000800000 6291456 PASSED 00:04:36.210 malloc 8388608 00:04:36.210 register 0x200000400000 10485760 00:04:36.210 buf 0x200000600000 len 8388608 PASSED 00:04:36.210 free 0x200000600000 8388608 00:04:36.210 unregister 0x200000400000 10485760 PASSED 00:04:36.210 passed 00:04:36.210 00:04:36.210 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.210 suites 1 1 n/a 0 0 00:04:36.210 tests 1 1 1 0 0 00:04:36.210 asserts 15 15 15 0 n/a 00:04:36.210 00:04:36.210 Elapsed time = 0.009 seconds 00:04:36.210 00:04:36.210 real 0m0.152s 00:04:36.210 user 0m0.017s 00:04:36.210 sys 0m0.035s 00:04:36.210 21:18:09 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.210 21:18:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.210 ************************************ 00:04:36.210 END TEST env_mem_callbacks 00:04:36.210 ************************************ 00:04:36.210 21:18:09 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.210 00:04:36.210 real 0m2.193s 00:04:36.210 user 0m1.019s 00:04:36.210 sys 0m0.841s 00:04:36.210 21:18:09 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.210 21:18:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.210 ************************************ 00:04:36.210 END TEST env 00:04:36.210 ************************************ 00:04:36.469 21:18:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.469 21:18:09 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.469 21:18:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.469 21:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.469 21:18:09 -- common/autotest_common.sh@10 -- # set +x 00:04:36.469 ************************************ 00:04:36.469 START TEST rpc 00:04:36.469 ************************************ 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:36.469 * Looking for test storage... 00:04:36.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:36.469 21:18:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58754 00:04:36.469 21:18:09 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:36.469 21:18:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.469 21:18:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58754 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@829 -- # '[' -z 58754 ']' 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
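For reference, the two env unit binaries exercised above can be re-run by hand outside autotest. A minimal sketch, assuming the tree is already built at /home/vagrant/spdk_repo/spdk, hugepages are reserved (e.g. via scripts/setup.sh), and the emulated NVMe devices at 0000:00:10.0/11.0 are still bound; the paths and flags are copied from the log:

    # DPDK post-init check: same core mask and base virtual address autotest passed above
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000

    # memory registration callbacks: produces the register/unregister pairs seen above
    /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks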
00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.469 21:18:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.469 [2024-07-15 21:18:09.804237] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:36.469 [2024-07-15 21:18:09.804304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58754 ] 00:04:36.728 [2024-07-15 21:18:09.945333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.729 [2024-07-15 21:18:10.040020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.729 [2024-07-15 21:18:10.040077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58754' to capture a snapshot of events at runtime. 00:04:36.729 [2024-07-15 21:18:10.040086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.729 [2024-07-15 21:18:10.040094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.729 [2024-07-15 21:18:10.040102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58754 for offline analysis/debug. 00:04:36.729 [2024-07-15 21:18:10.040133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.729 [2024-07-15 21:18:10.083540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.296 21:18:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.296 21:18:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.296 21:18:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.296 21:18:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.296 21:18:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.296 21:18:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.296 21:18:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.296 21:18:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.296 21:18:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.296 ************************************ 00:04:37.296 START TEST rpc_integrity 00:04:37.296 ************************************ 00:04:37.296 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.296 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.296 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.296 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.296 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.296 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.556 { 00:04:37.556 "name": "Malloc0", 00:04:37.556 "aliases": [ 00:04:37.556 "494ab1bc-e8b3-4c4b-bd48-64a5fcf73e13" 00:04:37.556 ], 00:04:37.556 "product_name": "Malloc disk", 00:04:37.556 "block_size": 512, 00:04:37.556 "num_blocks": 16384, 00:04:37.556 "uuid": "494ab1bc-e8b3-4c4b-bd48-64a5fcf73e13", 00:04:37.556 "assigned_rate_limits": { 00:04:37.556 "rw_ios_per_sec": 0, 00:04:37.556 "rw_mbytes_per_sec": 0, 00:04:37.556 "r_mbytes_per_sec": 0, 00:04:37.556 "w_mbytes_per_sec": 0 00:04:37.556 }, 00:04:37.556 "claimed": false, 00:04:37.556 "zoned": false, 00:04:37.556 "supported_io_types": { 00:04:37.556 "read": true, 00:04:37.556 "write": true, 00:04:37.556 "unmap": true, 00:04:37.556 "flush": true, 00:04:37.556 "reset": true, 00:04:37.556 "nvme_admin": false, 00:04:37.556 "nvme_io": false, 00:04:37.556 "nvme_io_md": false, 00:04:37.556 "write_zeroes": true, 00:04:37.556 "zcopy": true, 00:04:37.556 "get_zone_info": false, 00:04:37.556 "zone_management": false, 00:04:37.556 "zone_append": false, 00:04:37.556 "compare": false, 00:04:37.556 "compare_and_write": false, 00:04:37.556 "abort": true, 00:04:37.556 "seek_hole": false, 00:04:37.556 "seek_data": false, 00:04:37.556 "copy": true, 00:04:37.556 "nvme_iov_md": false 00:04:37.556 }, 00:04:37.556 "memory_domains": [ 00:04:37.556 { 00:04:37.556 "dma_device_id": "system", 00:04:37.556 "dma_device_type": 1 00:04:37.556 }, 00:04:37.556 { 00:04:37.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.556 "dma_device_type": 2 00:04:37.556 } 00:04:37.556 ], 00:04:37.556 "driver_specific": {} 00:04:37.556 } 00:04:37.556 ]' 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.556 [2024-07-15 21:18:10.790908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.556 [2024-07-15 21:18:10.790955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.556 [2024-07-15 21:18:10.790974] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1296da0 00:04:37.556 [2024-07-15 21:18:10.790983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.556 [2024-07-15 21:18:10.792348] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.556 [2024-07-15 21:18:10.792380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:04:37.556 Passthru0 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.556 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.556 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.556 { 00:04:37.556 "name": "Malloc0", 00:04:37.556 "aliases": [ 00:04:37.556 "494ab1bc-e8b3-4c4b-bd48-64a5fcf73e13" 00:04:37.556 ], 00:04:37.556 "product_name": "Malloc disk", 00:04:37.556 "block_size": 512, 00:04:37.556 "num_blocks": 16384, 00:04:37.556 "uuid": "494ab1bc-e8b3-4c4b-bd48-64a5fcf73e13", 00:04:37.556 "assigned_rate_limits": { 00:04:37.556 "rw_ios_per_sec": 0, 00:04:37.556 "rw_mbytes_per_sec": 0, 00:04:37.556 "r_mbytes_per_sec": 0, 00:04:37.556 "w_mbytes_per_sec": 0 00:04:37.556 }, 00:04:37.556 "claimed": true, 00:04:37.556 "claim_type": "exclusive_write", 00:04:37.556 "zoned": false, 00:04:37.556 "supported_io_types": { 00:04:37.556 "read": true, 00:04:37.556 "write": true, 00:04:37.556 "unmap": true, 00:04:37.556 "flush": true, 00:04:37.556 "reset": true, 00:04:37.556 "nvme_admin": false, 00:04:37.556 "nvme_io": false, 00:04:37.556 "nvme_io_md": false, 00:04:37.556 "write_zeroes": true, 00:04:37.556 "zcopy": true, 00:04:37.556 "get_zone_info": false, 00:04:37.556 "zone_management": false, 00:04:37.556 "zone_append": false, 00:04:37.556 "compare": false, 00:04:37.556 "compare_and_write": false, 00:04:37.556 "abort": true, 00:04:37.556 "seek_hole": false, 00:04:37.556 "seek_data": false, 00:04:37.556 "copy": true, 00:04:37.556 "nvme_iov_md": false 00:04:37.556 }, 00:04:37.556 "memory_domains": [ 00:04:37.556 { 00:04:37.556 "dma_device_id": "system", 00:04:37.556 "dma_device_type": 1 00:04:37.556 }, 00:04:37.556 { 00:04:37.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.556 "dma_device_type": 2 00:04:37.556 } 00:04:37.556 ], 00:04:37.556 "driver_specific": {} 00:04:37.556 }, 00:04:37.556 { 00:04:37.556 "name": "Passthru0", 00:04:37.556 "aliases": [ 00:04:37.556 "1c08ffe3-e97f-5f6f-8b77-026a4239684a" 00:04:37.556 ], 00:04:37.556 "product_name": "passthru", 00:04:37.556 "block_size": 512, 00:04:37.556 "num_blocks": 16384, 00:04:37.556 "uuid": "1c08ffe3-e97f-5f6f-8b77-026a4239684a", 00:04:37.556 "assigned_rate_limits": { 00:04:37.556 "rw_ios_per_sec": 0, 00:04:37.556 "rw_mbytes_per_sec": 0, 00:04:37.556 "r_mbytes_per_sec": 0, 00:04:37.556 "w_mbytes_per_sec": 0 00:04:37.556 }, 00:04:37.556 "claimed": false, 00:04:37.556 "zoned": false, 00:04:37.556 "supported_io_types": { 00:04:37.556 "read": true, 00:04:37.556 "write": true, 00:04:37.556 "unmap": true, 00:04:37.556 "flush": true, 00:04:37.556 "reset": true, 00:04:37.556 "nvme_admin": false, 00:04:37.556 "nvme_io": false, 00:04:37.556 "nvme_io_md": false, 00:04:37.556 "write_zeroes": true, 00:04:37.557 "zcopy": true, 00:04:37.557 "get_zone_info": false, 00:04:37.557 "zone_management": false, 00:04:37.557 "zone_append": false, 00:04:37.557 "compare": false, 00:04:37.557 "compare_and_write": false, 00:04:37.557 "abort": true, 00:04:37.557 "seek_hole": false, 00:04:37.557 "seek_data": false, 00:04:37.557 "copy": true, 00:04:37.557 "nvme_iov_md": false 00:04:37.557 }, 00:04:37.557 "memory_domains": [ 00:04:37.557 { 00:04:37.557 "dma_device_id": "system", 00:04:37.557 
"dma_device_type": 1 00:04:37.557 }, 00:04:37.557 { 00:04:37.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.557 "dma_device_type": 2 00:04:37.557 } 00:04:37.557 ], 00:04:37.557 "driver_specific": { 00:04:37.557 "passthru": { 00:04:37.557 "name": "Passthru0", 00:04:37.557 "base_bdev_name": "Malloc0" 00:04:37.557 } 00:04:37.557 } 00:04:37.557 } 00:04:37.557 ]' 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.557 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.557 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.816 21:18:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.816 00:04:37.816 real 0m0.290s 00:04:37.816 user 0m0.176s 00:04:37.816 sys 0m0.045s 00:04:37.816 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.816 ************************************ 00:04:37.816 END TEST rpc_integrity 00:04:37.816 ************************************ 00:04:37.816 21:18:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.816 21:18:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.816 21:18:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.816 21:18:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.816 21:18:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.816 21:18:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.816 ************************************ 00:04:37.816 START TEST rpc_plugins 00:04:37.816 ************************************ 00:04:37.816 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:37.816 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.816 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.816 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.816 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.816 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.817 
21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.817 { 00:04:37.817 "name": "Malloc1", 00:04:37.817 "aliases": [ 00:04:37.817 "7a7a5cd2-5687-4f94-95fa-2a5868f544af" 00:04:37.817 ], 00:04:37.817 "product_name": "Malloc disk", 00:04:37.817 "block_size": 4096, 00:04:37.817 "num_blocks": 256, 00:04:37.817 "uuid": "7a7a5cd2-5687-4f94-95fa-2a5868f544af", 00:04:37.817 "assigned_rate_limits": { 00:04:37.817 "rw_ios_per_sec": 0, 00:04:37.817 "rw_mbytes_per_sec": 0, 00:04:37.817 "r_mbytes_per_sec": 0, 00:04:37.817 "w_mbytes_per_sec": 0 00:04:37.817 }, 00:04:37.817 "claimed": false, 00:04:37.817 "zoned": false, 00:04:37.817 "supported_io_types": { 00:04:37.817 "read": true, 00:04:37.817 "write": true, 00:04:37.817 "unmap": true, 00:04:37.817 "flush": true, 00:04:37.817 "reset": true, 00:04:37.817 "nvme_admin": false, 00:04:37.817 "nvme_io": false, 00:04:37.817 "nvme_io_md": false, 00:04:37.817 "write_zeroes": true, 00:04:37.817 "zcopy": true, 00:04:37.817 "get_zone_info": false, 00:04:37.817 "zone_management": false, 00:04:37.817 "zone_append": false, 00:04:37.817 "compare": false, 00:04:37.817 "compare_and_write": false, 00:04:37.817 "abort": true, 00:04:37.817 "seek_hole": false, 00:04:37.817 "seek_data": false, 00:04:37.817 "copy": true, 00:04:37.817 "nvme_iov_md": false 00:04:37.817 }, 00:04:37.817 "memory_domains": [ 00:04:37.817 { 00:04:37.817 "dma_device_id": "system", 00:04:37.817 "dma_device_type": 1 00:04:37.817 }, 00:04:37.817 { 00:04:37.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.817 "dma_device_type": 2 00:04:37.817 } 00:04:37.817 ], 00:04:37.817 "driver_specific": {} 00:04:37.817 } 00:04:37.817 ]' 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.817 21:18:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.817 00:04:37.817 real 0m0.159s 00:04:37.817 user 0m0.097s 00:04:37.817 sys 0m0.027s 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.817 ************************************ 00:04:37.817 END TEST rpc_plugins 00:04:37.817 21:18:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.817 ************************************ 00:04:38.076 21:18:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.076 21:18:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.076 21:18:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.076 21:18:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
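TEST rpc_plugins above goes through the rpc.py plugin loader rather than a built-in method; a rough hand-run equivalent, assuming PYTHONPATH still includes test/rpc_plugins as exported earlier in this run so the rpc_plugin module resolves:

    export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
    ./scripts/rpc.py --plugin rpc_plugin create_malloc         # e.g. Malloc1
    ./scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1
    ./scripts/rpc.py bdev_get_bdevs | jq length                # 0 after cleanup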
00:04:38.076 21:18:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.076 ************************************ 00:04:38.076 START TEST rpc_trace_cmd_test 00:04:38.076 ************************************ 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.076 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58754", 00:04:38.076 "tpoint_group_mask": "0x8", 00:04:38.076 "iscsi_conn": { 00:04:38.076 "mask": "0x2", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "scsi": { 00:04:38.076 "mask": "0x4", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "bdev": { 00:04:38.076 "mask": "0x8", 00:04:38.076 "tpoint_mask": "0xffffffffffffffff" 00:04:38.076 }, 00:04:38.076 "nvmf_rdma": { 00:04:38.076 "mask": "0x10", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "nvmf_tcp": { 00:04:38.076 "mask": "0x20", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "ftl": { 00:04:38.076 "mask": "0x40", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "blobfs": { 00:04:38.076 "mask": "0x80", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "dsa": { 00:04:38.076 "mask": "0x200", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "thread": { 00:04:38.076 "mask": "0x400", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "nvme_pcie": { 00:04:38.076 "mask": "0x800", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "iaa": { 00:04:38.076 "mask": "0x1000", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "nvme_tcp": { 00:04:38.076 "mask": "0x2000", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "bdev_nvme": { 00:04:38.076 "mask": "0x4000", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 }, 00:04:38.076 "sock": { 00:04:38.076 "mask": "0x8000", 00:04:38.076 "tpoint_mask": "0x0" 00:04:38.076 } 00:04:38.076 }' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.076 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.335 21:18:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:38.335 00:04:38.335 real 0m0.229s 00:04:38.335 user 0m0.180s 00:04:38.335 sys 0m0.039s 00:04:38.335 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.335 
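The trace output above confirms that launching spdk_tgt with '-e bdev' enables tpoint group mask 0x8 and publishes the shared-memory trace file named in tpoint_shm_path; a sketch of inspecting and decoding it for this run (pid 58754, commands taken from the log):

    ./scripts/rpc.py trace_get_info                  # same JSON as shown above
    ./build/bin/spdk_trace -s spdk_tgt -p 58754      # decode /dev/shm/spdk_tgt_trace.pid58754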
************************************ 00:04:38.335 END TEST rpc_trace_cmd_test 00:04:38.335 21:18:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 ************************************ 00:04:38.335 21:18:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.335 21:18:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.335 21:18:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.335 21:18:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.335 21:18:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.335 21:18:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.335 21:18:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 ************************************ 00:04:38.335 START TEST rpc_daemon_integrity 00:04:38.335 ************************************ 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.335 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.335 { 00:04:38.335 "name": "Malloc2", 00:04:38.335 "aliases": [ 00:04:38.335 "b14d7ef2-152a-4067-80b2-eba35bc7f8b9" 00:04:38.335 ], 00:04:38.335 "product_name": "Malloc disk", 00:04:38.335 "block_size": 512, 00:04:38.335 "num_blocks": 16384, 00:04:38.335 "uuid": "b14d7ef2-152a-4067-80b2-eba35bc7f8b9", 00:04:38.335 "assigned_rate_limits": { 00:04:38.335 "rw_ios_per_sec": 0, 00:04:38.335 "rw_mbytes_per_sec": 0, 00:04:38.335 "r_mbytes_per_sec": 0, 00:04:38.335 "w_mbytes_per_sec": 0 00:04:38.335 }, 00:04:38.335 "claimed": false, 00:04:38.335 "zoned": false, 00:04:38.335 "supported_io_types": { 00:04:38.335 "read": true, 00:04:38.335 "write": true, 00:04:38.335 "unmap": true, 00:04:38.335 "flush": true, 00:04:38.335 "reset": true, 00:04:38.335 "nvme_admin": false, 00:04:38.335 "nvme_io": false, 00:04:38.335 "nvme_io_md": false, 00:04:38.335 "write_zeroes": true, 00:04:38.335 "zcopy": true, 00:04:38.335 "get_zone_info": false, 00:04:38.335 "zone_management": false, 00:04:38.335 "zone_append": 
false, 00:04:38.335 "compare": false, 00:04:38.335 "compare_and_write": false, 00:04:38.335 "abort": true, 00:04:38.335 "seek_hole": false, 00:04:38.335 "seek_data": false, 00:04:38.335 "copy": true, 00:04:38.335 "nvme_iov_md": false 00:04:38.335 }, 00:04:38.335 "memory_domains": [ 00:04:38.335 { 00:04:38.335 "dma_device_id": "system", 00:04:38.335 "dma_device_type": 1 00:04:38.335 }, 00:04:38.335 { 00:04:38.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.335 "dma_device_type": 2 00:04:38.335 } 00:04:38.335 ], 00:04:38.335 "driver_specific": {} 00:04:38.335 } 00:04:38.335 ]' 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.336 [2024-07-15 21:18:11.661996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.336 [2024-07-15 21:18:11.662045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.336 [2024-07-15 21:18:11.662066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12fbbe0 00:04:38.336 [2024-07-15 21:18:11.662074] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.336 [2024-07-15 21:18:11.663307] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.336 [2024-07-15 21:18:11.663335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.336 Passthru0 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.336 { 00:04:38.336 "name": "Malloc2", 00:04:38.336 "aliases": [ 00:04:38.336 "b14d7ef2-152a-4067-80b2-eba35bc7f8b9" 00:04:38.336 ], 00:04:38.336 "product_name": "Malloc disk", 00:04:38.336 "block_size": 512, 00:04:38.336 "num_blocks": 16384, 00:04:38.336 "uuid": "b14d7ef2-152a-4067-80b2-eba35bc7f8b9", 00:04:38.336 "assigned_rate_limits": { 00:04:38.336 "rw_ios_per_sec": 0, 00:04:38.336 "rw_mbytes_per_sec": 0, 00:04:38.336 "r_mbytes_per_sec": 0, 00:04:38.336 "w_mbytes_per_sec": 0 00:04:38.336 }, 00:04:38.336 "claimed": true, 00:04:38.336 "claim_type": "exclusive_write", 00:04:38.336 "zoned": false, 00:04:38.336 "supported_io_types": { 00:04:38.336 "read": true, 00:04:38.336 "write": true, 00:04:38.336 "unmap": true, 00:04:38.336 "flush": true, 00:04:38.336 "reset": true, 00:04:38.336 "nvme_admin": false, 00:04:38.336 "nvme_io": false, 00:04:38.336 "nvme_io_md": false, 00:04:38.336 "write_zeroes": true, 00:04:38.336 "zcopy": true, 00:04:38.336 "get_zone_info": false, 00:04:38.336 "zone_management": false, 00:04:38.336 "zone_append": false, 00:04:38.336 "compare": false, 00:04:38.336 "compare_and_write": false, 00:04:38.336 "abort": true, 00:04:38.336 
"seek_hole": false, 00:04:38.336 "seek_data": false, 00:04:38.336 "copy": true, 00:04:38.336 "nvme_iov_md": false 00:04:38.336 }, 00:04:38.336 "memory_domains": [ 00:04:38.336 { 00:04:38.336 "dma_device_id": "system", 00:04:38.336 "dma_device_type": 1 00:04:38.336 }, 00:04:38.336 { 00:04:38.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.336 "dma_device_type": 2 00:04:38.336 } 00:04:38.336 ], 00:04:38.336 "driver_specific": {} 00:04:38.336 }, 00:04:38.336 { 00:04:38.336 "name": "Passthru0", 00:04:38.336 "aliases": [ 00:04:38.336 "1af7c12e-cffb-594c-b824-f1cd0cec645c" 00:04:38.336 ], 00:04:38.336 "product_name": "passthru", 00:04:38.336 "block_size": 512, 00:04:38.336 "num_blocks": 16384, 00:04:38.336 "uuid": "1af7c12e-cffb-594c-b824-f1cd0cec645c", 00:04:38.336 "assigned_rate_limits": { 00:04:38.336 "rw_ios_per_sec": 0, 00:04:38.336 "rw_mbytes_per_sec": 0, 00:04:38.336 "r_mbytes_per_sec": 0, 00:04:38.336 "w_mbytes_per_sec": 0 00:04:38.336 }, 00:04:38.336 "claimed": false, 00:04:38.336 "zoned": false, 00:04:38.336 "supported_io_types": { 00:04:38.336 "read": true, 00:04:38.336 "write": true, 00:04:38.336 "unmap": true, 00:04:38.336 "flush": true, 00:04:38.336 "reset": true, 00:04:38.336 "nvme_admin": false, 00:04:38.336 "nvme_io": false, 00:04:38.336 "nvme_io_md": false, 00:04:38.336 "write_zeroes": true, 00:04:38.336 "zcopy": true, 00:04:38.336 "get_zone_info": false, 00:04:38.336 "zone_management": false, 00:04:38.336 "zone_append": false, 00:04:38.336 "compare": false, 00:04:38.336 "compare_and_write": false, 00:04:38.336 "abort": true, 00:04:38.336 "seek_hole": false, 00:04:38.336 "seek_data": false, 00:04:38.336 "copy": true, 00:04:38.336 "nvme_iov_md": false 00:04:38.336 }, 00:04:38.336 "memory_domains": [ 00:04:38.336 { 00:04:38.336 "dma_device_id": "system", 00:04:38.336 "dma_device_type": 1 00:04:38.336 }, 00:04:38.336 { 00:04:38.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.336 "dma_device_type": 2 00:04:38.336 } 00:04:38.336 ], 00:04:38.336 "driver_specific": { 00:04:38.336 "passthru": { 00:04:38.336 "name": "Passthru0", 00:04:38.336 "base_bdev_name": "Malloc2" 00:04:38.336 } 00:04:38.336 } 00:04:38.336 } 00:04:38.336 ]' 00:04:38.336 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.595 21:18:11 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.595 00:04:38.595 real 0m0.282s 00:04:38.595 user 0m0.169s 00:04:38.595 sys 0m0.049s 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.595 21:18:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 ************************************ 00:04:38.595 END TEST rpc_daemon_integrity 00:04:38.595 ************************************ 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.595 21:18:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.595 21:18:11 rpc -- rpc/rpc.sh@84 -- # killprocess 58754 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 58754 ']' 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@952 -- # kill -0 58754 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58754 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.595 killing process with pid 58754 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58754' 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@967 -- # kill 58754 00:04:38.595 21:18:11 rpc -- common/autotest_common.sh@972 -- # wait 58754 00:04:38.853 00:04:38.853 real 0m2.601s 00:04:38.853 user 0m3.215s 00:04:38.853 sys 0m0.715s 00:04:38.853 21:18:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.853 21:18:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.853 ************************************ 00:04:38.853 END TEST rpc 00:04:38.853 ************************************ 00:04:39.111 21:18:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.111 21:18:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:39.111 21:18:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.111 21:18:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.111 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:04:39.111 ************************************ 00:04:39.111 START TEST skip_rpc 00:04:39.111 ************************************ 00:04:39.111 21:18:12 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:39.111 * Looking for test storage... 
00:04:39.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.111 21:18:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.111 21:18:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.111 21:18:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.111 21:18:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.111 21:18:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.111 21:18:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.111 ************************************ 00:04:39.111 START TEST skip_rpc 00:04:39.111 ************************************ 00:04:39.111 21:18:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:39.111 21:18:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58947 00:04:39.111 21:18:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.111 21:18:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.111 21:18:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.111 [2024-07-15 21:18:12.482000] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:39.370 [2024-07-15 21:18:12.482189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58947 ] 00:04:39.370 [2024-07-15 21:18:12.624339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.370 [2024-07-15 21:18:12.732113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.629 [2024-07-15 21:18:12.789382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58947 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58947 ']' 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58947 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.899 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58947 00:04:44.899 killing process with pid 58947 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58947' 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58947 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58947 00:04:44.900 00:04:44.900 real 0m5.365s 00:04:44.900 user 0m5.004s 00:04:44.900 sys 0m0.271s 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.900 21:18:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.900 ************************************ 00:04:44.900 END TEST skip_rpc 00:04:44.900 ************************************ 00:04:44.900 21:18:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.900 21:18:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.900 21:18:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.900 21:18:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.900 21:18:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.900 ************************************ 00:04:44.900 START TEST skip_rpc_with_json 00:04:44.900 ************************************ 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59033 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59033 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59033 ']' 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
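The inner skip_rpc case that finished above checks the negative path: with --no-rpc-server no /var/tmp/spdk.sock is created, so spdk_get_version has to fail. A condensed sketch of that flow using the binary and flags from the log:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                          # same settle time the test uses
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered with no RPC server"
    else
        echo "expected: RPC failed, nothing listening"
    fi
    kill "$tgt_pid"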
00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.900 21:18:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.900 [2024-07-15 21:18:17.904597] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:44.900 [2024-07-15 21:18:17.904671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ] 00:04:44.900 [2024-07-15 21:18:18.045566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.900 [2024-07-15 21:18:18.142002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.900 [2024-07-15 21:18:18.184563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.469 [2024-07-15 21:18:18.733929] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.469 request: 00:04:45.469 { 00:04:45.469 "trtype": "tcp", 00:04:45.469 "method": "nvmf_get_transports", 00:04:45.469 "req_id": 1 00:04:45.469 } 00:04:45.469 Got JSON-RPC error response 00:04:45.469 response: 00:04:45.469 { 00:04:45.469 "code": -19, 00:04:45.469 "message": "No such device" 00:04:45.469 } 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.469 [2024-07-15 21:18:18.745983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.469 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.728 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.728 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.728 { 00:04:45.728 "subsystems": [ 00:04:45.728 { 00:04:45.728 "subsystem": "keyring", 00:04:45.728 "config": [] 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "subsystem": "iobuf", 00:04:45.728 "config": [ 00:04:45.728 { 00:04:45.728 "method": "iobuf_set_options", 00:04:45.728 "params": { 00:04:45.728 "small_pool_count": 8192, 00:04:45.728 "large_pool_count": 1024, 00:04:45.728 "small_bufsize": 8192, 00:04:45.728 "large_bufsize": 135168 00:04:45.728 } 00:04:45.728 } 00:04:45.728 
] 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "subsystem": "sock", 00:04:45.728 "config": [ 00:04:45.728 { 00:04:45.728 "method": "sock_set_default_impl", 00:04:45.728 "params": { 00:04:45.728 "impl_name": "uring" 00:04:45.728 } 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "method": "sock_impl_set_options", 00:04:45.728 "params": { 00:04:45.728 "impl_name": "ssl", 00:04:45.728 "recv_buf_size": 4096, 00:04:45.728 "send_buf_size": 4096, 00:04:45.728 "enable_recv_pipe": true, 00:04:45.728 "enable_quickack": false, 00:04:45.728 "enable_placement_id": 0, 00:04:45.728 "enable_zerocopy_send_server": true, 00:04:45.728 "enable_zerocopy_send_client": false, 00:04:45.728 "zerocopy_threshold": 0, 00:04:45.728 "tls_version": 0, 00:04:45.728 "enable_ktls": false 00:04:45.728 } 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "method": "sock_impl_set_options", 00:04:45.728 "params": { 00:04:45.728 "impl_name": "posix", 00:04:45.728 "recv_buf_size": 2097152, 00:04:45.728 "send_buf_size": 2097152, 00:04:45.728 "enable_recv_pipe": true, 00:04:45.728 "enable_quickack": false, 00:04:45.728 "enable_placement_id": 0, 00:04:45.728 "enable_zerocopy_send_server": true, 00:04:45.728 "enable_zerocopy_send_client": false, 00:04:45.728 "zerocopy_threshold": 0, 00:04:45.728 "tls_version": 0, 00:04:45.728 "enable_ktls": false 00:04:45.728 } 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "method": "sock_impl_set_options", 00:04:45.728 "params": { 00:04:45.728 "impl_name": "uring", 00:04:45.728 "recv_buf_size": 2097152, 00:04:45.728 "send_buf_size": 2097152, 00:04:45.728 "enable_recv_pipe": true, 00:04:45.728 "enable_quickack": false, 00:04:45.728 "enable_placement_id": 0, 00:04:45.728 "enable_zerocopy_send_server": false, 00:04:45.728 "enable_zerocopy_send_client": false, 00:04:45.728 "zerocopy_threshold": 0, 00:04:45.728 "tls_version": 0, 00:04:45.728 "enable_ktls": false 00:04:45.728 } 00:04:45.728 } 00:04:45.728 ] 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "subsystem": "vmd", 00:04:45.728 "config": [] 00:04:45.728 }, 00:04:45.728 { 00:04:45.728 "subsystem": "accel", 00:04:45.728 "config": [ 00:04:45.728 { 00:04:45.728 "method": "accel_set_options", 00:04:45.728 "params": { 00:04:45.728 "small_cache_size": 128, 00:04:45.728 "large_cache_size": 16, 00:04:45.728 "task_count": 2048, 00:04:45.728 "sequence_count": 2048, 00:04:45.728 "buf_count": 2048 00:04:45.728 } 00:04:45.728 } 00:04:45.729 ] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "bdev", 00:04:45.729 "config": [ 00:04:45.729 { 00:04:45.729 "method": "bdev_set_options", 00:04:45.729 "params": { 00:04:45.729 "bdev_io_pool_size": 65535, 00:04:45.729 "bdev_io_cache_size": 256, 00:04:45.729 "bdev_auto_examine": true, 00:04:45.729 "iobuf_small_cache_size": 128, 00:04:45.729 "iobuf_large_cache_size": 16 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "bdev_raid_set_options", 00:04:45.729 "params": { 00:04:45.729 "process_window_size_kb": 1024 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "bdev_iscsi_set_options", 00:04:45.729 "params": { 00:04:45.729 "timeout_sec": 30 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "bdev_nvme_set_options", 00:04:45.729 "params": { 00:04:45.729 "action_on_timeout": "none", 00:04:45.729 "timeout_us": 0, 00:04:45.729 "timeout_admin_us": 0, 00:04:45.729 "keep_alive_timeout_ms": 10000, 00:04:45.729 "arbitration_burst": 0, 00:04:45.729 "low_priority_weight": 0, 00:04:45.729 "medium_priority_weight": 0, 00:04:45.729 "high_priority_weight": 0, 00:04:45.729 
"nvme_adminq_poll_period_us": 10000, 00:04:45.729 "nvme_ioq_poll_period_us": 0, 00:04:45.729 "io_queue_requests": 0, 00:04:45.729 "delay_cmd_submit": true, 00:04:45.729 "transport_retry_count": 4, 00:04:45.729 "bdev_retry_count": 3, 00:04:45.729 "transport_ack_timeout": 0, 00:04:45.729 "ctrlr_loss_timeout_sec": 0, 00:04:45.729 "reconnect_delay_sec": 0, 00:04:45.729 "fast_io_fail_timeout_sec": 0, 00:04:45.729 "disable_auto_failback": false, 00:04:45.729 "generate_uuids": false, 00:04:45.729 "transport_tos": 0, 00:04:45.729 "nvme_error_stat": false, 00:04:45.729 "rdma_srq_size": 0, 00:04:45.729 "io_path_stat": false, 00:04:45.729 "allow_accel_sequence": false, 00:04:45.729 "rdma_max_cq_size": 0, 00:04:45.729 "rdma_cm_event_timeout_ms": 0, 00:04:45.729 "dhchap_digests": [ 00:04:45.729 "sha256", 00:04:45.729 "sha384", 00:04:45.729 "sha512" 00:04:45.729 ], 00:04:45.729 "dhchap_dhgroups": [ 00:04:45.729 "null", 00:04:45.729 "ffdhe2048", 00:04:45.729 "ffdhe3072", 00:04:45.729 "ffdhe4096", 00:04:45.729 "ffdhe6144", 00:04:45.729 "ffdhe8192" 00:04:45.729 ] 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "bdev_nvme_set_hotplug", 00:04:45.729 "params": { 00:04:45.729 "period_us": 100000, 00:04:45.729 "enable": false 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "bdev_wait_for_examine" 00:04:45.729 } 00:04:45.729 ] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "scsi", 00:04:45.729 "config": null 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "scheduler", 00:04:45.729 "config": [ 00:04:45.729 { 00:04:45.729 "method": "framework_set_scheduler", 00:04:45.729 "params": { 00:04:45.729 "name": "static" 00:04:45.729 } 00:04:45.729 } 00:04:45.729 ] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "vhost_scsi", 00:04:45.729 "config": [] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "vhost_blk", 00:04:45.729 "config": [] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "ublk", 00:04:45.729 "config": [] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "nbd", 00:04:45.729 "config": [] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": "nvmf", 00:04:45.729 "config": [ 00:04:45.729 { 00:04:45.729 "method": "nvmf_set_config", 00:04:45.729 "params": { 00:04:45.729 "discovery_filter": "match_any", 00:04:45.729 "admin_cmd_passthru": { 00:04:45.729 "identify_ctrlr": false 00:04:45.729 } 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "nvmf_set_max_subsystems", 00:04:45.729 "params": { 00:04:45.729 "max_subsystems": 1024 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "nvmf_set_crdt", 00:04:45.729 "params": { 00:04:45.729 "crdt1": 0, 00:04:45.729 "crdt2": 0, 00:04:45.729 "crdt3": 0 00:04:45.729 } 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "method": "nvmf_create_transport", 00:04:45.729 "params": { 00:04:45.729 "trtype": "TCP", 00:04:45.729 "max_queue_depth": 128, 00:04:45.729 "max_io_qpairs_per_ctrlr": 127, 00:04:45.729 "in_capsule_data_size": 4096, 00:04:45.729 "max_io_size": 131072, 00:04:45.729 "io_unit_size": 131072, 00:04:45.729 "max_aq_depth": 128, 00:04:45.729 "num_shared_buffers": 511, 00:04:45.729 "buf_cache_size": 4294967295, 00:04:45.729 "dif_insert_or_strip": false, 00:04:45.729 "zcopy": false, 00:04:45.729 "c2h_success": true, 00:04:45.729 "sock_priority": 0, 00:04:45.729 "abort_timeout_sec": 1, 00:04:45.729 "ack_timeout": 0, 00:04:45.729 "data_wr_pool_size": 0 00:04:45.729 } 00:04:45.729 } 00:04:45.729 ] 00:04:45.729 }, 00:04:45.729 { 00:04:45.729 "subsystem": 
"iscsi", 00:04:45.729 "config": [ 00:04:45.729 { 00:04:45.729 "method": "iscsi_set_options", 00:04:45.729 "params": { 00:04:45.729 "node_base": "iqn.2016-06.io.spdk", 00:04:45.729 "max_sessions": 128, 00:04:45.729 "max_connections_per_session": 2, 00:04:45.729 "max_queue_depth": 64, 00:04:45.729 "default_time2wait": 2, 00:04:45.729 "default_time2retain": 20, 00:04:45.729 "first_burst_length": 8192, 00:04:45.729 "immediate_data": true, 00:04:45.729 "allow_duplicated_isid": false, 00:04:45.729 "error_recovery_level": 0, 00:04:45.729 "nop_timeout": 60, 00:04:45.729 "nop_in_interval": 30, 00:04:45.729 "disable_chap": false, 00:04:45.729 "require_chap": false, 00:04:45.729 "mutual_chap": false, 00:04:45.729 "chap_group": 0, 00:04:45.729 "max_large_datain_per_connection": 64, 00:04:45.729 "max_r2t_per_connection": 4, 00:04:45.729 "pdu_pool_size": 36864, 00:04:45.729 "immediate_data_pool_size": 16384, 00:04:45.729 "data_out_pool_size": 2048 00:04:45.729 } 00:04:45.729 } 00:04:45.729 ] 00:04:45.729 } 00:04:45.729 ] 00:04:45.729 } 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59033 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59033 ']' 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59033 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59033 00:04:45.729 killing process with pid 59033 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59033' 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59033 00:04:45.729 21:18:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59033 00:04:45.988 21:18:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59055 00:04:45.988 21:18:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.988 21:18:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59055 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59055 ']' 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59055 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59055 00:04:51.259 killing process with pid 59055 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.259 21:18:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59055' 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59055 00:04:51.259 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59055 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:51.517 00:04:51.517 real 0m6.796s 00:04:51.517 user 0m6.482s 00:04:51.517 sys 0m0.559s 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.517 ************************************ 00:04:51.517 END TEST skip_rpc_with_json 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 ************************************ 00:04:51.517 21:18:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.517 21:18:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.517 21:18:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.517 21:18:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.517 21:18:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.517 ************************************ 00:04:51.517 START TEST skip_rpc_with_delay 00:04:51.517 ************************************ 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.517 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.518 [2024-07-15 
21:18:24.773594] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:51.518 [2024-07-15 21:18:24.773691] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.518 00:04:51.518 real 0m0.075s 00:04:51.518 user 0m0.045s 00:04:51.518 sys 0m0.029s 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.518 ************************************ 00:04:51.518 END TEST skip_rpc_with_delay 00:04:51.518 ************************************ 00:04:51.518 21:18:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.518 21:18:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.518 21:18:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.518 21:18:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.518 21:18:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.518 21:18:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.518 21:18:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.518 21:18:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.518 ************************************ 00:04:51.518 START TEST exit_on_failed_rpc_init 00:04:51.518 ************************************ 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59165 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59165 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59165 ']' 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.518 21:18:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.776 [2024-07-15 21:18:24.916796] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
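A note on the skip_rpc_with_delay case that finished just above: it asserts that spdk_tgt refuses the combination of --no-rpc-server and --wait-for-rpc, since there would be no RPC server to wait on. A minimal way to reproduce that check by hand, using the binary path from this run (a sketch, not the test's helper itself):

  # Expect the spdk_app_start error captured above and a non-zero exit status.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"   # the test only asserts that this is non-zero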
00:04:51.776 [2024-07-15 21:18:24.916878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:04:51.776 [2024-07-15 21:18:25.058314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.034 [2024-07-15 21:18:25.152786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.034 [2024-07-15 21:18:25.195399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:52.599 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.599 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:52.599 21:18:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:52.600 21:18:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.600 [2024-07-15 21:18:25.826377] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:52.600 [2024-07-15 21:18:25.826445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:04:52.600 [2024-07-15 21:18:25.969197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.886 [2024-07-15 21:18:26.063978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.886 [2024-07-15 21:18:26.064276] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:52.886 [2024-07-15 21:18:26.064417] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.886 [2024-07-15 21:18:26.064447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59165 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59165 ']' 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59165 00:04:52.886 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59165 00:04:52.887 killing process with pid 59165 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59165' 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59165 00:04:52.887 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59165 00:04:53.145 ************************************ 00:04:53.145 END TEST exit_on_failed_rpc_init 00:04:53.145 ************************************ 00:04:53.145 00:04:53.145 real 0m1.649s 00:04:53.145 user 0m1.848s 00:04:53.145 sys 0m0.390s 00:04:53.145 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.145 21:18:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.403 21:18:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.403 21:18:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.403 ************************************ 00:04:53.403 END TEST skip_rpc 00:04:53.403 ************************************ 00:04:53.403 00:04:53.403 real 0m14.291s 00:04:53.403 user 0m13.511s 00:04:53.403 sys 0m1.527s 00:04:53.403 21:18:26 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.403 21:18:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.403 21:18:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.403 21:18:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.403 21:18:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.403 
21:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.403 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.403 ************************************ 00:04:53.403 START TEST rpc_client 00:04:53.403 ************************************ 00:04:53.403 21:18:26 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:53.403 * Looking for test storage... 00:04:53.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:53.663 21:18:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:53.663 OK 00:04:53.663 21:18:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.663 ************************************ 00:04:53.663 END TEST rpc_client 00:04:53.663 ************************************ 00:04:53.663 00:04:53.663 real 0m0.159s 00:04:53.663 user 0m0.070s 00:04:53.663 sys 0m0.100s 00:04:53.663 21:18:26 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.663 21:18:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.663 21:18:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.663 21:18:26 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.663 21:18:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.663 21:18:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.663 21:18:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.663 ************************************ 00:04:53.663 START TEST json_config 00:04:53.663 ************************************ 00:04:53.663 21:18:26 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.663 21:18:26 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.663 21:18:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.663 21:18:26 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.663 21:18:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.663 21:18:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.663 21:18:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.663 21:18:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.663 21:18:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.663 21:18:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.663 21:18:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.663 INFO: JSON configuration test init 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:53.663 21:18:26 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:53.663 21:18:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.663 21:18:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.663 21:18:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.663 Waiting for target to run... 00:04:53.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.663 21:18:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.663 21:18:27 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.663 21:18:27 json_config -- json_config/common.sh@10 -- # shift 00:04:53.663 21:18:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.663 21:18:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.663 21:18:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.663 21:18:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.663 21:18:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.663 21:18:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59301 00:04:53.663 21:18:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
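The 'Waiting for target to run...' message above is printed by json_config_test_start_app, which turns the per-app tables defined earlier (app_params, app_socket, configs_path) into one spdk_tgt invocation on a private RPC socket and then waits for it. Roughly, for the 'target' app in this run; the polling loop is an assumption about what waitforlisten does, not its exact code:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!
  # Poll the private socket until the RPC server answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done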
00:04:53.663 21:18:27 json_config -- json_config/common.sh@25 -- # waitforlisten 59301 /var/tmp/spdk_tgt.sock 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 59301 ']' 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.663 21:18:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.663 21:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.921 [2024-07-15 21:18:27.069359] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:53.921 [2024-07-15 21:18:27.069625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 00:04:54.177 [2024-07-15 21:18:27.434188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.177 [2024-07-15 21:18:27.513105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:54.744 21:18:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.744 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.744 21:18:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.744 21:18:27 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:54.744 21:18:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:55.002 [2024-07-15 21:18:28.159733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:55.002 21:18:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.002 21:18:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:55.002 21:18:28 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:55.002 21:18:28 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:55.002 21:18:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:55.260 21:18:28 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:55.260 21:18:28 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:55.260 21:18:28 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:55.260 21:18:28 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:55.260 21:18:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.260 21:18:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:55.519 21:18:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:55.519 21:18:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.519 21:18:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.519 MallocForNvmf0 00:04:55.519 21:18:28 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.519 21:18:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.776 MallocForNvmf1 00:04:55.776 21:18:29 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.776 21:18:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.034 [2024-07-15 21:18:29.267513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.034 21:18:29 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.034 21:18:29 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.294 21:18:29 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.294 21:18:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.553 21:18:29 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.553 21:18:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.553 21:18:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.553 21:18:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.811 [2024-07-15 21:18:30.094742] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.811 21:18:30 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:56.811 21:18:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.811 21:18:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.811 21:18:30 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:56.811 21:18:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.811 21:18:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.070 21:18:30 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:57.070 21:18:30 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.070 21:18:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.070 MallocBdevForConfigChangeCheck 00:04:57.070 21:18:30 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:57.070 21:18:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.070 21:18:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.328 21:18:30 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:57.328 21:18:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.586 INFO: shutting down applications... 00:04:57.586 21:18:30 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
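Before the shutdown announced here, json_config_setup_target built the NVMe/TCP side of the configuration one tgt_rpc call at a time (the bdev_malloc_create and nvmf_* calls traced above). Collected into a plain script, with the arguments copied from those calls, the sequence is roughly:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB bdev, 1024-byte blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # TCP transport (-u io_unit_size, -c in-capsule data size)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420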
00:04:57.586 21:18:30 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:57.586 21:18:30 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:57.586 21:18:30 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:57.586 21:18:30 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:57.844 Calling clear_iscsi_subsystem 00:04:57.844 Calling clear_nvmf_subsystem 00:04:57.844 Calling clear_nbd_subsystem 00:04:57.844 Calling clear_ublk_subsystem 00:04:57.844 Calling clear_vhost_blk_subsystem 00:04:57.844 Calling clear_vhost_scsi_subsystem 00:04:57.844 Calling clear_bdev_subsystem 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.844 21:18:31 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:58.123 21:18:31 json_config -- json_config/json_config.sh@345 -- # break 00:04:58.123 21:18:31 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:58.123 21:18:31 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:58.123 21:18:31 json_config -- json_config/common.sh@31 -- # local app=target 00:04:58.123 21:18:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.123 21:18:31 json_config -- json_config/common.sh@35 -- # [[ -n 59301 ]] 00:04:58.123 21:18:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59301 00:04:58.123 21:18:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.123 21:18:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.123 21:18:31 json_config -- json_config/common.sh@41 -- # kill -0 59301 00:04:58.123 21:18:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.690 21:18:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.690 21:18:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.690 21:18:31 json_config -- json_config/common.sh@41 -- # kill -0 59301 00:04:58.690 SPDK target shutdown done 00:04:58.690 INFO: relaunching applications... 00:04:58.690 21:18:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.690 21:18:31 json_config -- json_config/common.sh@43 -- # break 00:04:58.690 21:18:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.690 21:18:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.690 21:18:31 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
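The teardown traced above is what json_config_test_shutdown_app does before the relaunch announced here: strip the runtime configuration over RPC, then ask the target to exit and give it a bounded grace period rather than killing it outright. As a standalone sketch with the pid and paths from this run:

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT 59301                  # request a clean shutdown
  for ((i = 0; i < 30; i++)); do      # up to ~15 s, matching the i<30 / sleep 0.5 loop above
      kill -0 59301 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done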
00:04:58.690 21:18:31 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.690 21:18:31 json_config -- json_config/common.sh@9 -- # local app=target 00:04:58.690 21:18:31 json_config -- json_config/common.sh@10 -- # shift 00:04:58.690 21:18:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.690 21:18:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.690 21:18:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.690 21:18:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.690 21:18:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.690 21:18:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59486 00:04:58.690 21:18:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:58.690 21:18:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.690 Waiting for target to run... 00:04:58.690 21:18:32 json_config -- json_config/common.sh@25 -- # waitforlisten 59486 /var/tmp/spdk_tgt.sock 00:04:58.690 21:18:32 json_config -- common/autotest_common.sh@829 -- # '[' -z 59486 ']' 00:04:58.691 21:18:32 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.691 21:18:32 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.691 21:18:32 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.691 21:18:32 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.691 21:18:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.949 [2024-07-15 21:18:32.076212] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:04:58.949 [2024-07-15 21:18:32.076554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ] 00:04:59.207 [2024-07-15 21:18:32.454843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.207 [2024-07-15 21:18:32.536552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.466 [2024-07-15 21:18:32.661854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:59.724 [2024-07-15 21:18:32.863881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.724 [2024-07-15 21:18:32.895878] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:59.724 00:04:59.724 INFO: Checking if target configuration is the same... 
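The check announced here is a straight round trip: the relaunched target's live configuration is dumped with save_config and compared against the spdk_tgt_config.json it was booted from. The json_diff.sh trace that follows boils down to roughly this (temp-file names here are illustrative; the real script uses mktemp):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  $RPC save_config | $FILTER -method sort > /tmp/live.sorted           # live config, canonical key order
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.sorted
  diff -u /tmp/live.sorted /tmp/file.sorted \
      && echo 'INFO: JSON config files are the same'                   # a non-empty diff flags a config change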
00:04:59.724 21:18:32 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.724 21:18:32 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:59.724 21:18:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:59.724 21:18:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:59.724 21:18:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:59.724 21:18:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:59.724 21:18:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.724 21:18:32 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:59.724 + '[' 2 -ne 2 ']' 00:04:59.724 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:59.724 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:59.724 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:59.724 +++ basename /dev/fd/62 00:04:59.724 ++ mktemp /tmp/62.XXX 00:04:59.724 + tmp_file_1=/tmp/62.J38 00:04:59.724 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:59.724 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:59.724 + tmp_file_2=/tmp/spdk_tgt_config.json.h0E 00:04:59.724 + ret=0 00:04:59.724 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:59.982 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:00.239 + diff -u /tmp/62.J38 /tmp/spdk_tgt_config.json.h0E 00:05:00.239 INFO: JSON config files are the same 00:05:00.239 + echo 'INFO: JSON config files are the same' 00:05:00.239 + rm /tmp/62.J38 /tmp/spdk_tgt_config.json.h0E 00:05:00.239 + exit 0 00:05:00.239 INFO: changing configuration and checking if this can be detected... 00:05:00.239 21:18:33 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:00.239 21:18:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:00.239 21:18:33 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.239 21:18:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:00.239 21:18:33 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:00.239 21:18:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.239 21:18:33 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:00.239 + '[' 2 -ne 2 ']' 00:05:00.239 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:00.239 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:00.551 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:00.551 +++ basename /dev/fd/62 00:05:00.551 ++ mktemp /tmp/62.XXX 00:05:00.551 + tmp_file_1=/tmp/62.uQi 00:05:00.551 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:00.551 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:00.551 + tmp_file_2=/tmp/spdk_tgt_config.json.Q45 00:05:00.551 + ret=0 00:05:00.551 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:00.818 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:00.818 + diff -u /tmp/62.uQi /tmp/spdk_tgt_config.json.Q45 00:05:00.818 + ret=1 00:05:00.818 + echo '=== Start of file: /tmp/62.uQi ===' 00:05:00.818 + cat /tmp/62.uQi 00:05:00.818 + echo '=== End of file: /tmp/62.uQi ===' 00:05:00.818 + echo '' 00:05:00.818 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Q45 ===' 00:05:00.818 + cat /tmp/spdk_tgt_config.json.Q45 00:05:00.818 + echo '=== End of file: /tmp/spdk_tgt_config.json.Q45 ===' 00:05:00.818 + echo '' 00:05:00.818 + rm /tmp/62.uQi /tmp/spdk_tgt_config.json.Q45 00:05:00.818 + exit 1 00:05:00.818 INFO: configuration change detected. 00:05:00.818 21:18:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 59486 ]] 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.818 21:18:34 json_config -- json_config/json_config.sh@323 -- # killprocess 59486 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@948 -- # '[' -z 59486 ']' 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@952 -- # kill -0 59486 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@953 -- # uname 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59486 00:05:00.818 
killing process with pid 59486 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59486' 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@967 -- # kill 59486 00:05:00.818 21:18:34 json_config -- common/autotest_common.sh@972 -- # wait 59486 00:05:01.077 21:18:34 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:01.077 21:18:34 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:01.077 21:18:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.077 21:18:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.077 21:18:34 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:01.077 INFO: Success 00:05:01.077 21:18:34 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:01.077 ************************************ 00:05:01.077 END TEST json_config 00:05:01.077 ************************************ 00:05:01.077 00:05:01.077 real 0m7.518s 00:05:01.077 user 0m10.180s 00:05:01.077 sys 0m1.857s 00:05:01.077 21:18:34 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.077 21:18:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.337 21:18:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.337 21:18:34 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.337 21:18:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.337 21:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.337 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:05:01.337 ************************************ 00:05:01.337 START TEST json_config_extra_key 00:05:01.337 ************************************ 00:05:01.337 21:18:34 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:01.337 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.337 21:18:34 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.337 21:18:34 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.338 21:18:34 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.338 21:18:34 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.338 21:18:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.338 21:18:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.338 21:18:34 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.338 21:18:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:01.338 21:18:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.338 21:18:34 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:01.338 21:18:34 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:01.338 INFO: launching applications... 00:05:01.338 Waiting for target to run... 00:05:01.338 21:18:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59626 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
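Here the target is booted directly from a JSON file (--json .../extra_key.json, visible in the launch line that follows) instead of being configured over RPC afterwards. The file's contents are not shown in this log, but the save_config dump captured earlier in the run illustrates the shape such files take: a top-level "subsystems" array whose entries each name a subsystem and list method/params pairs. A minimal, purely illustrative file in that shape (not the real extra_key.json):

  {
    "subsystems": [
      {
        "subsystem": "scheduler",
        "config": [
          { "method": "framework_set_scheduler", "params": { "name": "static" } }
        ]
      },
      {
        "subsystem": "nvmf",
        "config": [
          { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } }
        ]
      }
    ]
  }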
00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:01.338 21:18:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59626 /var/tmp/spdk_tgt.sock 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59626 ']' 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.338 21:18:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.338 [2024-07-15 21:18:34.646304] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:01.338 [2024-07-15 21:18:34.646368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:05:01.905 [2024-07-15 21:18:35.003726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.905 [2024-07-15 21:18:35.080867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.905 [2024-07-15 21:18:35.100762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.164 00:05:02.164 INFO: shutting down applications... 00:05:02.164 21:18:35 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.164 21:18:35 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:02.164 21:18:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
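The launch traced above amounts to starting spdk_tgt with the extra_key.json configuration on a private RPC socket and then waiting for that socket to answer. A simplified stand-in for json_config_test_start_app plus waitforlisten is sketched here; the retry count and poll interval are arbitrary choices, not the values used by the harness:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    CFG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    # Core mask, memory size and socket path match the command in the trace.
    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" --json "$CFG" &
    app_pid=$!

    # Poll the RPC socket until the target responds.
    for _ in $(seq 1 30); do
        "$RPC" -s "$SOCK" -t 1 rpc_get_methods &> /dev/null && break
        sleep 0.5
    done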
00:05:02.164 21:18:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59626 ]] 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59626 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59626 00:05:02.164 21:18:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59626 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.731 21:18:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.731 SPDK target shutdown done 00:05:02.731 Success 00:05:02.731 21:18:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:02.731 00:05:02.731 real 0m1.517s 00:05:02.731 user 0m1.225s 00:05:02.731 sys 0m0.383s 00:05:02.731 21:18:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.731 ************************************ 00:05:02.731 END TEST json_config_extra_key 00:05:02.731 ************************************ 00:05:02.731 21:18:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.731 21:18:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.731 21:18:36 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.731 21:18:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.731 21:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.731 21:18:36 -- common/autotest_common.sh@10 -- # set +x 00:05:02.731 ************************************ 00:05:02.731 START TEST alias_rpc 00:05:02.731 ************************************ 00:05:02.731 21:18:36 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.989 * Looking for test storage... 
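The shutdown traced above is a SIGINT followed by up to 30 liveness probes half a second apart; reassembled from common.sh's xtrace it reads roughly as:

    # Ask the target to exit, then wait for its PID to disappear.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done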
00:05:02.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:02.989 21:18:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.989 21:18:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.989 21:18:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59691 00:05:02.989 21:18:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59691 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59691 ']' 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.989 21:18:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.989 [2024-07-15 21:18:36.230694] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:02.989 [2024-07-15 21:18:36.231220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59691 ] 00:05:03.247 [2024-07-15 21:18:36.372397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.247 [2024-07-15 21:18:36.468177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.247 [2024-07-15 21:18:36.508847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.841 21:18:37 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.842 21:18:37 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:03.842 21:18:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:04.136 21:18:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59691 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59691 ']' 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59691 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59691 00:05:04.136 killing process with pid 59691 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59691' 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@967 -- # kill 59691 00:05:04.136 21:18:37 alias_rpc -- common/autotest_common.sh@972 -- # wait 59691 00:05:04.394 00:05:04.394 real 0m1.565s 00:05:04.394 user 0m1.647s 00:05:04.394 sys 0m0.403s 00:05:04.394 21:18:37 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.394 21:18:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.394 
************************************ 00:05:04.394 END TEST alias_rpc 00:05:04.394 ************************************ 00:05:04.394 21:18:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.394 21:18:37 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:04.394 21:18:37 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.394 21:18:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.394 21:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.394 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:05:04.394 ************************************ 00:05:04.394 START TEST spdkcli_tcp 00:05:04.394 ************************************ 00:05:04.394 21:18:37 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.652 * Looking for test storage... 00:05:04.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59761 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.653 21:18:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59761 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59761 ']' 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.653 21:18:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.653 [2024-07-15 21:18:37.888263] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
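The alias_rpc test that just finished drives rpc.py's load_config with -i so that a saved configuration can be replayed through the deprecated RPC alias names. A minimal sketch of the mechanism (not of the test's own config, whose contents are not shown in this log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Capture the running target's configuration, then replay it; -i allows the
    # replayed calls to use deprecated RPC aliases. The temp path is hypothetical.
    "$RPC" save_config > /tmp/alias_config.json
    "$RPC" load_config -i < /tmp/alias_config.json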
00:05:04.653 [2024-07-15 21:18:37.888332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:05:04.911 [2024-07-15 21:18:38.030068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.911 [2024-07-15 21:18:38.127939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.911 [2024-07-15 21:18:38.127939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.911 [2024-07-15 21:18:38.169134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.478 21:18:38 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.478 21:18:38 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:05.478 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59773 00:05:05.478 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.478 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.737 [ 00:05:05.737 "bdev_malloc_delete", 00:05:05.737 "bdev_malloc_create", 00:05:05.737 "bdev_null_resize", 00:05:05.737 "bdev_null_delete", 00:05:05.737 "bdev_null_create", 00:05:05.737 "bdev_nvme_cuse_unregister", 00:05:05.737 "bdev_nvme_cuse_register", 00:05:05.737 "bdev_opal_new_user", 00:05:05.737 "bdev_opal_set_lock_state", 00:05:05.737 "bdev_opal_delete", 00:05:05.737 "bdev_opal_get_info", 00:05:05.737 "bdev_opal_create", 00:05:05.737 "bdev_nvme_opal_revert", 00:05:05.737 "bdev_nvme_opal_init", 00:05:05.737 "bdev_nvme_send_cmd", 00:05:05.737 "bdev_nvme_get_path_iostat", 00:05:05.737 "bdev_nvme_get_mdns_discovery_info", 00:05:05.737 "bdev_nvme_stop_mdns_discovery", 00:05:05.737 "bdev_nvme_start_mdns_discovery", 00:05:05.737 "bdev_nvme_set_multipath_policy", 00:05:05.737 "bdev_nvme_set_preferred_path", 00:05:05.737 "bdev_nvme_get_io_paths", 00:05:05.737 "bdev_nvme_remove_error_injection", 00:05:05.737 "bdev_nvme_add_error_injection", 00:05:05.737 "bdev_nvme_get_discovery_info", 00:05:05.737 "bdev_nvme_stop_discovery", 00:05:05.737 "bdev_nvme_start_discovery", 00:05:05.737 "bdev_nvme_get_controller_health_info", 00:05:05.737 "bdev_nvme_disable_controller", 00:05:05.737 "bdev_nvme_enable_controller", 00:05:05.737 "bdev_nvme_reset_controller", 00:05:05.737 "bdev_nvme_get_transport_statistics", 00:05:05.737 "bdev_nvme_apply_firmware", 00:05:05.737 "bdev_nvme_detach_controller", 00:05:05.737 "bdev_nvme_get_controllers", 00:05:05.737 "bdev_nvme_attach_controller", 00:05:05.737 "bdev_nvme_set_hotplug", 00:05:05.737 "bdev_nvme_set_options", 00:05:05.737 "bdev_passthru_delete", 00:05:05.737 "bdev_passthru_create", 00:05:05.737 "bdev_lvol_set_parent_bdev", 00:05:05.737 "bdev_lvol_set_parent", 00:05:05.737 "bdev_lvol_check_shallow_copy", 00:05:05.737 "bdev_lvol_start_shallow_copy", 00:05:05.737 "bdev_lvol_grow_lvstore", 00:05:05.737 "bdev_lvol_get_lvols", 00:05:05.737 "bdev_lvol_get_lvstores", 00:05:05.737 "bdev_lvol_delete", 00:05:05.737 "bdev_lvol_set_read_only", 00:05:05.737 "bdev_lvol_resize", 00:05:05.737 "bdev_lvol_decouple_parent", 00:05:05.737 "bdev_lvol_inflate", 00:05:05.737 "bdev_lvol_rename", 00:05:05.737 "bdev_lvol_clone_bdev", 00:05:05.737 "bdev_lvol_clone", 00:05:05.737 "bdev_lvol_snapshot", 00:05:05.737 "bdev_lvol_create", 
00:05:05.737 "bdev_lvol_delete_lvstore", 00:05:05.737 "bdev_lvol_rename_lvstore", 00:05:05.737 "bdev_lvol_create_lvstore", 00:05:05.737 "bdev_raid_set_options", 00:05:05.737 "bdev_raid_remove_base_bdev", 00:05:05.737 "bdev_raid_add_base_bdev", 00:05:05.737 "bdev_raid_delete", 00:05:05.737 "bdev_raid_create", 00:05:05.737 "bdev_raid_get_bdevs", 00:05:05.737 "bdev_error_inject_error", 00:05:05.737 "bdev_error_delete", 00:05:05.737 "bdev_error_create", 00:05:05.737 "bdev_split_delete", 00:05:05.737 "bdev_split_create", 00:05:05.737 "bdev_delay_delete", 00:05:05.737 "bdev_delay_create", 00:05:05.737 "bdev_delay_update_latency", 00:05:05.737 "bdev_zone_block_delete", 00:05:05.737 "bdev_zone_block_create", 00:05:05.737 "blobfs_create", 00:05:05.737 "blobfs_detect", 00:05:05.737 "blobfs_set_cache_size", 00:05:05.737 "bdev_aio_delete", 00:05:05.737 "bdev_aio_rescan", 00:05:05.737 "bdev_aio_create", 00:05:05.737 "bdev_ftl_set_property", 00:05:05.737 "bdev_ftl_get_properties", 00:05:05.737 "bdev_ftl_get_stats", 00:05:05.737 "bdev_ftl_unmap", 00:05:05.737 "bdev_ftl_unload", 00:05:05.737 "bdev_ftl_delete", 00:05:05.737 "bdev_ftl_load", 00:05:05.737 "bdev_ftl_create", 00:05:05.737 "bdev_virtio_attach_controller", 00:05:05.737 "bdev_virtio_scsi_get_devices", 00:05:05.737 "bdev_virtio_detach_controller", 00:05:05.737 "bdev_virtio_blk_set_hotplug", 00:05:05.737 "bdev_iscsi_delete", 00:05:05.737 "bdev_iscsi_create", 00:05:05.737 "bdev_iscsi_set_options", 00:05:05.737 "bdev_uring_delete", 00:05:05.737 "bdev_uring_rescan", 00:05:05.737 "bdev_uring_create", 00:05:05.737 "accel_error_inject_error", 00:05:05.737 "ioat_scan_accel_module", 00:05:05.737 "dsa_scan_accel_module", 00:05:05.737 "iaa_scan_accel_module", 00:05:05.737 "keyring_file_remove_key", 00:05:05.737 "keyring_file_add_key", 00:05:05.737 "keyring_linux_set_options", 00:05:05.737 "iscsi_get_histogram", 00:05:05.737 "iscsi_enable_histogram", 00:05:05.737 "iscsi_set_options", 00:05:05.737 "iscsi_get_auth_groups", 00:05:05.737 "iscsi_auth_group_remove_secret", 00:05:05.737 "iscsi_auth_group_add_secret", 00:05:05.737 "iscsi_delete_auth_group", 00:05:05.737 "iscsi_create_auth_group", 00:05:05.737 "iscsi_set_discovery_auth", 00:05:05.737 "iscsi_get_options", 00:05:05.737 "iscsi_target_node_request_logout", 00:05:05.737 "iscsi_target_node_set_redirect", 00:05:05.737 "iscsi_target_node_set_auth", 00:05:05.737 "iscsi_target_node_add_lun", 00:05:05.737 "iscsi_get_stats", 00:05:05.737 "iscsi_get_connections", 00:05:05.737 "iscsi_portal_group_set_auth", 00:05:05.737 "iscsi_start_portal_group", 00:05:05.737 "iscsi_delete_portal_group", 00:05:05.737 "iscsi_create_portal_group", 00:05:05.737 "iscsi_get_portal_groups", 00:05:05.737 "iscsi_delete_target_node", 00:05:05.737 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.737 "iscsi_target_node_add_pg_ig_maps", 00:05:05.737 "iscsi_create_target_node", 00:05:05.737 "iscsi_get_target_nodes", 00:05:05.737 "iscsi_delete_initiator_group", 00:05:05.737 "iscsi_initiator_group_remove_initiators", 00:05:05.737 "iscsi_initiator_group_add_initiators", 00:05:05.737 "iscsi_create_initiator_group", 00:05:05.737 "iscsi_get_initiator_groups", 00:05:05.737 "nvmf_set_crdt", 00:05:05.737 "nvmf_set_config", 00:05:05.737 "nvmf_set_max_subsystems", 00:05:05.737 "nvmf_stop_mdns_prr", 00:05:05.737 "nvmf_publish_mdns_prr", 00:05:05.737 "nvmf_subsystem_get_listeners", 00:05:05.737 "nvmf_subsystem_get_qpairs", 00:05:05.737 "nvmf_subsystem_get_controllers", 00:05:05.737 "nvmf_get_stats", 00:05:05.737 "nvmf_get_transports", 00:05:05.737 
"nvmf_create_transport", 00:05:05.737 "nvmf_get_targets", 00:05:05.737 "nvmf_delete_target", 00:05:05.737 "nvmf_create_target", 00:05:05.737 "nvmf_subsystem_allow_any_host", 00:05:05.737 "nvmf_subsystem_remove_host", 00:05:05.737 "nvmf_subsystem_add_host", 00:05:05.737 "nvmf_ns_remove_host", 00:05:05.737 "nvmf_ns_add_host", 00:05:05.737 "nvmf_subsystem_remove_ns", 00:05:05.737 "nvmf_subsystem_add_ns", 00:05:05.737 "nvmf_subsystem_listener_set_ana_state", 00:05:05.737 "nvmf_discovery_get_referrals", 00:05:05.737 "nvmf_discovery_remove_referral", 00:05:05.737 "nvmf_discovery_add_referral", 00:05:05.737 "nvmf_subsystem_remove_listener", 00:05:05.737 "nvmf_subsystem_add_listener", 00:05:05.737 "nvmf_delete_subsystem", 00:05:05.737 "nvmf_create_subsystem", 00:05:05.737 "nvmf_get_subsystems", 00:05:05.737 "env_dpdk_get_mem_stats", 00:05:05.738 "nbd_get_disks", 00:05:05.738 "nbd_stop_disk", 00:05:05.738 "nbd_start_disk", 00:05:05.738 "ublk_recover_disk", 00:05:05.738 "ublk_get_disks", 00:05:05.738 "ublk_stop_disk", 00:05:05.738 "ublk_start_disk", 00:05:05.738 "ublk_destroy_target", 00:05:05.738 "ublk_create_target", 00:05:05.738 "virtio_blk_create_transport", 00:05:05.738 "virtio_blk_get_transports", 00:05:05.738 "vhost_controller_set_coalescing", 00:05:05.738 "vhost_get_controllers", 00:05:05.738 "vhost_delete_controller", 00:05:05.738 "vhost_create_blk_controller", 00:05:05.738 "vhost_scsi_controller_remove_target", 00:05:05.738 "vhost_scsi_controller_add_target", 00:05:05.738 "vhost_start_scsi_controller", 00:05:05.738 "vhost_create_scsi_controller", 00:05:05.738 "thread_set_cpumask", 00:05:05.738 "framework_get_governor", 00:05:05.738 "framework_get_scheduler", 00:05:05.738 "framework_set_scheduler", 00:05:05.738 "framework_get_reactors", 00:05:05.738 "thread_get_io_channels", 00:05:05.738 "thread_get_pollers", 00:05:05.738 "thread_get_stats", 00:05:05.738 "framework_monitor_context_switch", 00:05:05.738 "spdk_kill_instance", 00:05:05.738 "log_enable_timestamps", 00:05:05.738 "log_get_flags", 00:05:05.738 "log_clear_flag", 00:05:05.738 "log_set_flag", 00:05:05.738 "log_get_level", 00:05:05.738 "log_set_level", 00:05:05.738 "log_get_print_level", 00:05:05.738 "log_set_print_level", 00:05:05.738 "framework_enable_cpumask_locks", 00:05:05.738 "framework_disable_cpumask_locks", 00:05:05.738 "framework_wait_init", 00:05:05.738 "framework_start_init", 00:05:05.738 "scsi_get_devices", 00:05:05.738 "bdev_get_histogram", 00:05:05.738 "bdev_enable_histogram", 00:05:05.738 "bdev_set_qos_limit", 00:05:05.738 "bdev_set_qd_sampling_period", 00:05:05.738 "bdev_get_bdevs", 00:05:05.738 "bdev_reset_iostat", 00:05:05.738 "bdev_get_iostat", 00:05:05.738 "bdev_examine", 00:05:05.738 "bdev_wait_for_examine", 00:05:05.738 "bdev_set_options", 00:05:05.738 "notify_get_notifications", 00:05:05.738 "notify_get_types", 00:05:05.738 "accel_get_stats", 00:05:05.738 "accel_set_options", 00:05:05.738 "accel_set_driver", 00:05:05.738 "accel_crypto_key_destroy", 00:05:05.738 "accel_crypto_keys_get", 00:05:05.738 "accel_crypto_key_create", 00:05:05.738 "accel_assign_opc", 00:05:05.738 "accel_get_module_info", 00:05:05.738 "accel_get_opc_assignments", 00:05:05.738 "vmd_rescan", 00:05:05.738 "vmd_remove_device", 00:05:05.738 "vmd_enable", 00:05:05.738 "sock_get_default_impl", 00:05:05.738 "sock_set_default_impl", 00:05:05.738 "sock_impl_set_options", 00:05:05.738 "sock_impl_get_options", 00:05:05.738 "iobuf_get_stats", 00:05:05.738 "iobuf_set_options", 00:05:05.738 "framework_get_pci_devices", 00:05:05.738 
"framework_get_config", 00:05:05.738 "framework_get_subsystems", 00:05:05.738 "trace_get_info", 00:05:05.738 "trace_get_tpoint_group_mask", 00:05:05.738 "trace_disable_tpoint_group", 00:05:05.738 "trace_enable_tpoint_group", 00:05:05.738 "trace_clear_tpoint_mask", 00:05:05.738 "trace_set_tpoint_mask", 00:05:05.738 "keyring_get_keys", 00:05:05.738 "spdk_get_version", 00:05:05.738 "rpc_get_methods" 00:05:05.738 ] 00:05:05.738 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.738 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.738 21:18:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59761 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59761 ']' 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59761 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.738 21:18:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59761 00:05:05.738 21:18:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.738 21:18:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.738 killing process with pid 59761 00:05:05.738 21:18:39 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59761' 00:05:05.738 21:18:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59761 00:05:05.738 21:18:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59761 00:05:05.996 ************************************ 00:05:05.996 END TEST spdkcli_tcp 00:05:05.996 ************************************ 00:05:05.996 00:05:05.996 real 0m1.635s 00:05:05.996 user 0m2.838s 00:05:05.996 sys 0m0.459s 00:05:05.996 21:18:39 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.996 21:18:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:06.253 21:18:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:06.253 21:18:39 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.253 21:18:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.253 21:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.253 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.253 ************************************ 00:05:06.253 START TEST dpdk_mem_utility 00:05:06.253 ************************************ 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:06.253 * Looking for test storage... 00:05:06.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:06.253 21:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:06.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:06.253 21:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59847 00:05:06.253 21:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59847 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59847 ']' 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.253 21:18:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.253 21:18:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.253 [2024-07-15 21:18:39.591661] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:06.253 [2024-07-15 21:18:39.591730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:05:06.510 [2024-07-15 21:18:39.731585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.510 [2024-07-15 21:18:39.827790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.510 [2024-07-15 21:18:39.868878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.076 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.076 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:07.076 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:07.076 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:07.076 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.076 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.076 { 00:05:07.076 "filename": "/tmp/spdk_mem_dump.txt" 00:05:07.076 } 00:05:07.076 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.076 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.335 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:07.335 1 heaps totaling size 814.000000 MiB 00:05:07.335 size: 814.000000 MiB heap id: 0 00:05:07.335 end heaps---------- 00:05:07.335 8 mempools totaling size 598.116089 MiB 00:05:07.335 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:07.335 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:07.335 size: 84.521057 MiB name: bdev_io_59847 00:05:07.335 size: 51.011292 MiB name: evtpool_59847 00:05:07.335 size: 50.003479 MiB name: msgpool_59847 00:05:07.335 size: 21.763794 MiB name: PDU_Pool 00:05:07.335 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:07.335 size: 0.026123 MiB name: Session_Pool 00:05:07.335 end mempools------- 00:05:07.335 6 memzones totaling size 4.142822 MiB 00:05:07.335 size: 1.000366 MiB name: RG_ring_0_59847 00:05:07.335 size: 1.000366 MiB 
name: RG_ring_1_59847 00:05:07.335 size: 1.000366 MiB name: RG_ring_4_59847 00:05:07.335 size: 1.000366 MiB name: RG_ring_5_59847 00:05:07.335 size: 0.125366 MiB name: RG_ring_2_59847 00:05:07.335 size: 0.015991 MiB name: RG_ring_3_59847 00:05:07.335 end memzones------- 00:05:07.335 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:07.335 heap id: 0 total size: 814.000000 MiB number of busy elements: 298 number of free elements: 15 00:05:07.335 list of free elements. size: 12.472290 MiB 00:05:07.335 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:07.335 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:07.335 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:07.335 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:07.335 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:07.335 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:07.335 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:07.335 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:07.335 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:07.335 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:07.335 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:07.335 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:07.335 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:07.335 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:07.335 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:07.335 list of standard malloc elements. size: 199.265137 MiB 00:05:07.335 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:07.335 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:07.335 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:07.335 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:07.335 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:07.335 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:07.335 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:07.335 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:07.335 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:07.335 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:07.335 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:07.335 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:07.336 element at address: 
0x200003a590c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000070fdd80 with size: 
0.000183 MiB 00:05:07.336 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:07.336 
element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:07.336 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:07.337 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:07.337 element at address: 
0x200027e6c6c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6eb80 with size: 
0.000183 MiB 00:05:07.337 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:07.337 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:07.337 list of memzone associated elements. 
size: 602.262573 MiB 00:05:07.337 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:07.337 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:07.337 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:07.337 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:07.337 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:07.337 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59847_0 00:05:07.337 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:07.337 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59847_0 00:05:07.337 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:07.337 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59847_0 00:05:07.337 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:07.337 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:07.337 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:07.338 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:07.338 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:07.338 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59847 00:05:07.338 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:07.338 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59847 00:05:07.338 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:07.338 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59847 00:05:07.338 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:07.338 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:07.338 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:07.338 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:07.338 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:07.338 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:07.338 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:07.338 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:07.338 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:07.338 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59847 00:05:07.338 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:07.338 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59847 00:05:07.338 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:07.338 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59847 00:05:07.338 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:07.338 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59847 00:05:07.338 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:07.338 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59847 00:05:07.338 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:07.338 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:07.338 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:07.338 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:07.338 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:07.338 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:07.338 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:07.338 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_59847 00:05:07.338 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:07.338 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:07.338 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:07.338 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:07.338 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:07.338 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59847 00:05:07.338 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:07.338 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:07.338 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:07.338 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59847 00:05:07.338 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:07.338 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59847 00:05:07.338 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:07.338 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:07.338 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:07.338 21:18:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59847 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59847 ']' 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59847 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59847 00:05:07.338 killing process with pid 59847 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59847' 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59847 00:05:07.338 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59847 00:05:07.596 00:05:07.596 real 0m1.512s 00:05:07.596 user 0m1.512s 00:05:07.596 sys 0m0.430s 00:05:07.596 ************************************ 00:05:07.596 END TEST dpdk_mem_utility 00:05:07.596 ************************************ 00:05:07.596 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.596 21:18:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.855 21:18:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:07.855 21:18:40 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.855 21:18:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.855 21:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.855 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:05:07.855 ************************************ 00:05:07.855 START TEST event 00:05:07.855 ************************************ 00:05:07.855 21:18:40 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:07.855 * Looking for test storage... 
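The dpdk_mem_utility test that just ended follows a simple pattern: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then summarizes that dump, with -m 0 narrowing the memzone listing to heap 0. Reassembled from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    # Ask the running spdk_tgt to dump its DPDK memory statistics.
    "$RPC" env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}

    # Summarize the dump: heaps, mempools and memzones, then heap 0 only.
    "$MEM_SCRIPT"
    "$MEM_SCRIPT" -m 0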
00:05:07.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.855 21:18:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:07.855 21:18:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.855 21:18:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.855 21:18:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:07.855 21:18:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.855 21:18:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.855 ************************************ 00:05:07.855 START TEST event_perf 00:05:07.855 ************************************ 00:05:07.855 21:18:41 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.855 Running I/O for 1 seconds...[2024-07-15 21:18:41.150273] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:07.855 [2024-07-15 21:18:41.150353] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:05:08.113 [2024-07-15 21:18:41.294276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.113 [2024-07-15 21:18:41.381910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.113 [2024-07-15 21:18:41.382092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.113 [2024-07-15 21:18:41.382125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.113 [2024-07-15 21:18:41.382130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.484 Running I/O for 1 seconds... 00:05:09.484 lcore 0: 200278 00:05:09.484 lcore 1: 200280 00:05:09.484 lcore 2: 200277 00:05:09.484 lcore 3: 200277 00:05:09.484 done. 00:05:09.484 00:05:09.484 ************************************ 00:05:09.484 END TEST event_perf 00:05:09.484 ************************************ 00:05:09.484 real 0m1.320s 00:05:09.484 user 0m4.134s 00:05:09.484 sys 0m0.062s 00:05:09.484 21:18:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.484 21:18:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.484 21:18:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:09.484 21:18:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.484 21:18:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:09.484 21:18:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.484 21:18:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.484 ************************************ 00:05:09.484 START TEST event_reactor 00:05:09.484 ************************************ 00:05:09.484 21:18:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:09.484 [2024-07-15 21:18:42.534738] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:09.484 [2024-07-15 21:18:42.534847] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:05:09.484 [2024-07-15 21:18:42.679956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.484 [2024-07-15 21:18:42.775008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.856 test_start 00:05:10.856 oneshot 00:05:10.856 tick 100 00:05:10.856 tick 100 00:05:10.856 tick 250 00:05:10.856 tick 100 00:05:10.856 tick 100 00:05:10.856 tick 100 00:05:10.856 tick 250 00:05:10.856 tick 500 00:05:10.856 tick 100 00:05:10.856 tick 100 00:05:10.856 tick 250 00:05:10.856 tick 100 00:05:10.856 tick 100 00:05:10.856 test_end 00:05:10.856 00:05:10.856 real 0m1.332s 00:05:10.856 user 0m1.178s 00:05:10.856 sys 0m0.047s 00:05:10.856 21:18:43 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.856 ************************************ 00:05:10.856 END TEST event_reactor 00:05:10.856 ************************************ 00:05:10.856 21:18:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:10.856 21:18:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.856 21:18:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.856 21:18:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:10.856 21:18:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.856 21:18:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.856 ************************************ 00:05:10.856 START TEST event_reactor_perf 00:05:10.856 ************************************ 00:05:10.856 21:18:43 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:10.856 [2024-07-15 21:18:43.948716] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
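For context, the event micro-benchmarks above (and the reactor_perf run that follows) are standalone SPDK test binaries: event_perf counts events dispatched per reactor for a fixed duration, reactor exercises poller ticks on a single reactor, and reactor_perf reports raw events per second on one core. A minimal sketch of the invocations as the suite issues them, assuming SPDK_DIR points at the repository checkout used in this run (the log's /home/vagrant/spdk_repo/spdk); run_test is the suite's timing wrapper and can be dropped to run a binary by hand:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # -m is the reactor core mask, -t the run time in seconds
  run_test event_perf "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
  run_test event_reactor "$SPDK_DIR/test/event/reactor/reactor" -t 1
  run_test event_reactor_perf "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1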
00:05:10.856 [2024-07-15 21:18:43.949126] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59987 ] 00:05:10.856 [2024-07-15 21:18:44.096423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.856 [2024-07-15 21:18:44.198096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.229 test_start 00:05:12.229 test_end 00:05:12.229 Performance: 455598 events per second 00:05:12.229 ************************************ 00:05:12.229 END TEST event_reactor_perf 00:05:12.229 ************************************ 00:05:12.229 00:05:12.229 real 0m1.348s 00:05:12.229 user 0m1.172s 00:05:12.229 sys 0m0.068s 00:05:12.229 21:18:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.229 21:18:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.229 21:18:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:12.229 21:18:45 event -- event/event.sh@49 -- # uname -s 00:05:12.229 21:18:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:12.229 21:18:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.229 21:18:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.229 21:18:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.229 21:18:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.229 ************************************ 00:05:12.229 START TEST event_scheduler 00:05:12.229 ************************************ 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:12.229 * Looking for test storage... 00:05:12.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:12.229 21:18:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:12.229 21:18:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60054 00:05:12.229 21:18:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:12.229 21:18:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.229 21:18:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60054 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60054 ']' 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.229 21:18:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.229 [2024-07-15 21:18:45.525109] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:12.229 [2024-07-15 21:18:45.525339] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60054 ] 00:05:12.486 [2024-07-15 21:18:45.667128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.486 [2024-07-15 21:18:45.823236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.486 [2024-07-15 21:18:45.823425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.486 [2024-07-15 21:18:45.823615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.486 [2024-07-15 21:18:45.823617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:13.051 21:18:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.051 POWER: Cannot set governor of lcore 0 to performance 00:05:13.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.051 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:13.051 POWER: Cannot set governor of lcore 0 to userspace 00:05:13.051 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:13.051 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:13.051 POWER: Unable to set Power Management Environment for lcore 0 00:05:13.051 [2024-07-15 21:18:46.365207] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:13.051 [2024-07-15 21:18:46.365224] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:13.051 [2024-07-15 21:18:46.365232] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:13.051 [2024-07-15 21:18:46.365245] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:13.051 [2024-07-15 21:18:46.365252] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:13.051 [2024-07-15 21:18:46.365259] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.051 21:18:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.051 21:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 [2024-07-15 21:18:46.446120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.311 [2024-07-15 21:18:46.493304] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:13.311 21:18:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:13.311 21:18:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.311 21:18:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 ************************************ 00:05:13.311 START TEST scheduler_create_thread 00:05:13.311 ************************************ 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 2 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 3 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 4 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 5 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 6 00:05:13.311 
21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 7 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 8 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 9 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.311 10 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.311 21:18:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.715 21:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.715 21:18:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:14.715 21:18:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:14.715 21:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.715 21:18:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.650 21:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.650 21:18:48 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.650 21:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.650 21:18:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.588 21:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.588 21:18:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:16.588 21:18:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:16.588 21:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.588 21:18:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.156 ************************************ 00:05:17.156 END TEST scheduler_create_thread 00:05:17.157 ************************************ 00:05:17.157 21:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.157 00:05:17.157 real 0m3.880s 00:05:17.157 user 0m0.023s 00:05:17.157 sys 0m0.009s 00:05:17.157 21:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.157 21:18:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:17.157 21:18:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:17.157 21:18:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60054 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60054 ']' 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60054 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60054 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:17.157 killing process with pid 60054 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60054' 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60054 00:05:17.157 21:18:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60054 00:05:17.415 [2024-07-15 21:18:50.769862] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
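To summarize the event_scheduler run that just finished: the test launches the scheduler app with --wait-for-rpc, switches it to the dynamic scheduler before init completes (the POWER/cpufreq errors above simply indicate that no CPU frequency scaling governor is available in this VM, so the DPDK governor falls back and the test continues), and then drives thread creation, activity changes, and deletion through the scheduler_plugin RPCs. A condensed sketch of that sequence, assuming rpc_cmd, waitforlisten, and killprocess are the suite's helpers from the common test scripts (as they are in this log), SPDK_DIR is the repo checkout, and the full set of pinned threads and exact thread ids are trimmed:

  # 4 cores (-m 0xF), main lcore on core 2 (-p 0x2), init gated on RPC (--wait-for-rpc)
  "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  waitforlisten "$scheduler_pid"
  rpc_cmd framework_set_scheduler dynamic      # choose the scheduler before subsystem init
  rpc_cmd framework_start_init
  # create pinned busy/idle threads, then an unpinned one whose activity is changed
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # a throwaway thread is created and deleted again
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
  killprocess "$scheduler_pid"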
00:05:17.983 00:05:17.983 real 0m5.704s 00:05:17.983 user 0m11.583s 00:05:17.983 sys 0m0.467s 00:05:17.983 21:18:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.983 ************************************ 00:05:17.983 END TEST event_scheduler 00:05:17.983 ************************************ 00:05:17.984 21:18:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.984 21:18:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.984 21:18:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.984 21:18:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.984 21:18:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.984 21:18:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.984 21:18:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.984 ************************************ 00:05:17.984 START TEST app_repeat 00:05:17.984 ************************************ 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60159 00:05:17.984 Process app_repeat pid: 60159 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60159' 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.984 spdk_app_start Round 0 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.984 21:18:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60159 /var/tmp/spdk-nbd.sock 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60159 ']' 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.984 21:18:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.984 [2024-07-15 21:18:51.165299] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
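The app_repeat test that starts here boots the same SPDK application three times inside one process ("spdk_app_start Round 0/1/2", same pid) to check that repeated start/stop cycles come up cleanly. The app is launched as app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, and in each round the harness creates two 64 MiB malloc bdevs with a 4 KiB block size, exports them as NBD devices, verifies I/O through them, and tears the round down. A condensed sketch of one round's RPC sequence, using the socket and script paths from the log (waits, sleeps, and the write/verify pass are omitted here; the verification itself is sketched after Round 1 below):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # -> Malloc0
  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # -> Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  # ... dd/cmp data verification over /dev/nbd0 and /dev/nbd1 ...
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM      # ends this round; app_repeat re-runs spdk_app_start for the next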
00:05:17.984 [2024-07-15 21:18:51.165389] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60159 ] 00:05:17.984 [2024-07-15 21:18:51.307487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.243 [2024-07-15 21:18:51.406122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.243 [2024-07-15 21:18:51.406125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.243 [2024-07-15 21:18:51.448201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.859 21:18:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.859 21:18:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:18.859 21:18:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.122 Malloc0 00:05:19.122 21:18:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.122 Malloc1 00:05:19.122 21:18:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.122 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.123 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.123 21:18:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.379 /dev/nbd0 00:05:19.379 21:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.379 21:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.379 21:18:52 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.379 1+0 records in 00:05:19.379 1+0 records out 00:05:19.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339675 s, 12.1 MB/s 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.379 21:18:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.379 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.379 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.379 21:18:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.636 /dev/nbd1 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.636 1+0 records in 00:05:19.636 1+0 records out 00:05:19.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057986 s, 7.1 MB/s 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.636 21:18:52 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.636 21:18:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.894 21:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.894 { 00:05:19.894 "nbd_device": "/dev/nbd0", 00:05:19.894 "bdev_name": "Malloc0" 00:05:19.894 }, 00:05:19.894 { 00:05:19.894 "nbd_device": "/dev/nbd1", 00:05:19.894 "bdev_name": "Malloc1" 00:05:19.894 } 00:05:19.894 ]' 00:05:19.894 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.894 { 00:05:19.894 "nbd_device": "/dev/nbd0", 00:05:19.894 "bdev_name": "Malloc0" 00:05:19.894 }, 00:05:19.894 { 00:05:19.894 "nbd_device": "/dev/nbd1", 00:05:19.894 "bdev_name": "Malloc1" 00:05:19.894 } 00:05:19.894 ]' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.895 /dev/nbd1' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.895 /dev/nbd1' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.895 256+0 records in 00:05:19.895 256+0 records out 00:05:19.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513494 s, 204 MB/s 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.895 256+0 records in 00:05:19.895 256+0 records out 00:05:19.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248754 s, 42.2 MB/s 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.895 256+0 records in 00:05:19.895 256+0 records out 00:05:19.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234733 s, 44.7 MB/s 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.895 21:18:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.152 21:18:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.408 21:18:53 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.408 21:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.664 21:18:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.664 21:18:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.921 21:18:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.178 [2024-07-15 21:18:54.323951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.178 [2024-07-15 21:18:54.404152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.178 [2024-07-15 21:18:54.404152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.178 [2024-07-15 21:18:54.445599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.178 [2024-07-15 21:18:54.445680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.178 [2024-07-15 21:18:54.445692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.456 spdk_app_start Round 1 00:05:24.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60159 /var/tmp/spdk-nbd.sock 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60159 ']' 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
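Each round's actual I/O check is the write/verify pass visible in the dd and cmp output above: 1 MiB of random data is written through every exported NBD device with direct I/O and then compared back byte for byte. A minimal sketch of that verification, using the same scratch-file path as the log:

  testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$testfile" bs=4096 count=256              # 1 MiB random pattern
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct     # write it through the NBD export
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$testfile" "$nbd"                                # read back and compare the first 1 MiB
  done
  rm "$testfile"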
00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.456 21:18:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.456 Malloc0 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.456 Malloc1 00:05:24.456 21:18:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.456 21:18:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.457 21:18:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.713 /dev/nbd0 00:05:24.714 21:18:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.714 21:18:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.714 1+0 records in 00:05:24.714 1+0 records out 
00:05:24.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302991 s, 13.5 MB/s 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.714 21:18:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.714 21:18:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.714 21:18:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.714 21:18:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.970 /dev/nbd1 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.971 1+0 records in 00:05:24.971 1+0 records out 00:05:24.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303999 s, 13.5 MB/s 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:24.971 21:18:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.971 21:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.228 21:18:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.228 { 00:05:25.228 "nbd_device": "/dev/nbd0", 00:05:25.228 "bdev_name": "Malloc0" 00:05:25.228 }, 00:05:25.228 { 00:05:25.229 "nbd_device": "/dev/nbd1", 00:05:25.229 "bdev_name": "Malloc1" 00:05:25.229 } 
00:05:25.229 ]' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.229 { 00:05:25.229 "nbd_device": "/dev/nbd0", 00:05:25.229 "bdev_name": "Malloc0" 00:05:25.229 }, 00:05:25.229 { 00:05:25.229 "nbd_device": "/dev/nbd1", 00:05:25.229 "bdev_name": "Malloc1" 00:05:25.229 } 00:05:25.229 ]' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.229 /dev/nbd1' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.229 /dev/nbd1' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.229 256+0 records in 00:05:25.229 256+0 records out 00:05:25.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118383 s, 88.6 MB/s 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.229 256+0 records in 00:05:25.229 256+0 records out 00:05:25.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236074 s, 44.4 MB/s 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.229 21:18:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.487 256+0 records in 00:05:25.487 256+0 records out 00:05:25.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278339 s, 37.7 MB/s 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.487 21:18:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.487 21:18:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.745 21:18:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.004 21:18:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.005 21:18:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.005 21:18:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.005 21:18:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.005 21:18:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.005 21:18:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.266 21:18:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.524 [2024-07-15 21:18:59.689344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.524 [2024-07-15 21:18:59.776469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.524 [2024-07-15 21:18:59.776476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.524 [2024-07-15 21:18:59.818906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.524 [2024-07-15 21:18:59.818977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.524 [2024-07-15 21:18:59.818988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.799 spdk_app_start Round 2 00:05:29.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.799 21:19:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.799 21:19:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:29.799 21:19:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60159 /var/tmp/spdk-nbd.sock 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60159 ']' 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
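
The trace above is the verification and teardown half of an app_repeat round: nbd_dd_data_verify fills a scratch file with 1 MiB from /dev/urandom, writes it through each NBD device and then cmp-checks the devices against that file; nbd_stop_disks detaches both exports over RPC, and the follow-up nbd_get_disks must return an empty list before the application is killed for the next round. A condensed sketch of that sequence, built only from the commands and paths visible in the log (it is not the literal nbd_common.sh helper code):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write the pattern through NBD
        cmp -b -n 1M $tmp $nbd                              # read it back and compare
    done
    rm $tmp
    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk $nbd                             # detach the export
    done
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 0 ]                                      # nothing may be left attached
    $rpc spdk_kill_instance SIGTERM                         # end this round of the app
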
00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.799 21:19:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:29.799 21:19:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.799 Malloc0 00:05:29.799 21:19:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.799 Malloc1 00:05:29.799 21:19:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.799 21:19:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.063 /dev/nbd0 00:05:30.063 21:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.063 21:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.063 1+0 records in 00:05:30.063 1+0 records out 
00:05:30.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298638 s, 13.7 MB/s 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.063 21:19:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.063 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.063 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.063 21:19:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.320 /dev/nbd1 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.320 1+0 records in 00:05:30.320 1+0 records out 00:05:30.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194953 s, 21.0 MB/s 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.320 21:19:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.320 21:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.577 { 00:05:30.577 "nbd_device": "/dev/nbd0", 00:05:30.577 "bdev_name": "Malloc0" 00:05:30.577 }, 00:05:30.577 { 00:05:30.577 "nbd_device": "/dev/nbd1", 00:05:30.577 "bdev_name": "Malloc1" 00:05:30.577 } 
00:05:30.577 ]' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.577 { 00:05:30.577 "nbd_device": "/dev/nbd0", 00:05:30.577 "bdev_name": "Malloc0" 00:05:30.577 }, 00:05:30.577 { 00:05:30.577 "nbd_device": "/dev/nbd1", 00:05:30.577 "bdev_name": "Malloc1" 00:05:30.577 } 00:05:30.577 ]' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.577 /dev/nbd1' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.577 /dev/nbd1' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.577 256+0 records in 00:05:30.577 256+0 records out 00:05:30.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114942 s, 91.2 MB/s 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.577 256+0 records in 00:05:30.577 256+0 records out 00:05:30.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247519 s, 42.4 MB/s 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.577 21:19:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.835 256+0 records in 00:05:30.835 256+0 records out 00:05:30.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0283575 s, 37.0 MB/s 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.835 21:19:03 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.835 21:19:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.835 21:19:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.093 21:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.350 21:19:04 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.350 21:19:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.350 21:19:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.611 21:19:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.869 [2024-07-15 21:19:05.054168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.869 [2024-07-15 21:19:05.138735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.869 [2024-07-15 21:19:05.138736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.869 [2024-07-15 21:19:05.180061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.869 [2024-07-15 21:19:05.180337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.869 [2024-07-15 21:19:05.180356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.146 21:19:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60159 /var/tmp/spdk-nbd.sock 00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60159 ']' 00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
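
At this point the third and final pass of the repeat loop starts. The shell side of app_repeat (event.sh, traced above with its for i in {0..2} loop) drives three iterations: each one waits for the application to re-open /var/tmp/spdk-nbd.sock, registers two 64 MB malloc bdevs with a 4096-byte block size, exports them as /dev/nbd0 and /dev/nbd1, runs the write/verify/teardown shown earlier, then sends SIGTERM and sleeps 3 seconds; the application restarts and the script echoes the next "spdk_app_start Round N" before driving another pass. A rough reconstruction of that driver loop, assuming the helpers sourced in the trace (waitforlisten and killprocess from autotest_common.sh); this is a sketch, not the script itself:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    app_pid=60159                                        # pid of the app_repeat binary in this run
    for round in 0 1 2; do
        waitforlisten $app_pid /var/tmp/spdk-nbd.sock    # poll until the RPC socket answers
        $rpc bdev_malloc_create 64 4096                  # -> Malloc0
        $rpc bdev_malloc_create 64 4096                  # -> Malloc1
        $rpc nbd_start_disk Malloc0 /dev/nbd0
        $rpc nbd_start_disk Malloc1 /dev/nbd1
        # ... write, verify and detach as in the sketch above ...
        $rpc spdk_kill_instance SIGTERM                  # app_repeat restarts into the next round
        sleep 3
    done
    waitforlisten $app_pid /var/tmp/spdk-nbd.sock        # final round comes up once more
    killprocess $app_pid                                 # and is shut down for good
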
00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.146 21:19:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.146 21:19:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.146 21:19:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.146 21:19:08 event.app_repeat -- event/event.sh@39 -- # killprocess 60159 00:05:35.146 21:19:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60159 ']' 00:05:35.146 21:19:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60159 00:05:35.146 21:19:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60159 00:05:35.147 killing process with pid 60159 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60159' 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60159 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60159 00:05:35.147 spdk_app_start is called in Round 0. 00:05:35.147 Shutdown signal received, stop current app iteration 00:05:35.147 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:05:35.147 spdk_app_start is called in Round 1. 00:05:35.147 Shutdown signal received, stop current app iteration 00:05:35.147 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:05:35.147 spdk_app_start is called in Round 2. 00:05:35.147 Shutdown signal received, stop current app iteration 00:05:35.147 Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 reinitialization... 00:05:35.147 spdk_app_start is called in Round 3. 
00:05:35.147 Shutdown signal received, stop current app iteration 00:05:35.147 ************************************ 00:05:35.147 END TEST app_repeat 00:05:35.147 ************************************ 00:05:35.147 21:19:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.147 21:19:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.147 00:05:35.147 real 0m17.186s 00:05:35.147 user 0m37.304s 00:05:35.147 sys 0m2.887s 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.147 21:19:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.147 21:19:08 event -- common/autotest_common.sh@1142 -- # return 0 00:05:35.147 21:19:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.147 21:19:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.147 21:19:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.147 21:19:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.147 21:19:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.147 ************************************ 00:05:35.147 START TEST cpu_locks 00:05:35.147 ************************************ 00:05:35.147 21:19:08 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.147 * Looking for test storage... 00:05:35.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.147 21:19:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.147 21:19:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.147 21:19:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.147 21:19:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.147 21:19:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.147 21:19:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.147 21:19:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.404 ************************************ 00:05:35.404 START TEST default_locks 00:05:35.404 ************************************ 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60575 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60575 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60575 ']' 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
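
With app_repeat finished, the suite moves on to cpu_locks.sh. The idea behind these tests: when spdk_tgt starts with a core mask (-m 0x1 here), it claims a lock file per core in the mask, and the file names contain "spdk_cpu_lock", which is exactly what the recurring locks_exist check greps for in the lslocks output. default_locks (pid 60575 below) is the baseline case: start one target, confirm the lock is held, then kill the process. A minimal sketch of that flow, assuming the helpers sourced in the trace (waitforlisten, killprocess from autotest_common.sh):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x1 &                            # claims the core-0 lock file on startup
    pid=$!
    waitforlisten $pid                            # default RPC socket /var/tmp/spdk.sock
    lslocks -p $pid | grep -q spdk_cpu_lock       # locks_exist: the lock must be visible
    killprocess $pid                              # kill + wait
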
00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.404 21:19:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.404 [2024-07-15 21:19:08.583474] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:35.404 [2024-07-15 21:19:08.583680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60575 ] 00:05:35.404 [2024-07-15 21:19:08.724934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.662 [2024-07-15 21:19:08.822828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.662 [2024-07-15 21:19:08.863656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.226 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.226 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:36.226 21:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60575 00:05:36.226 21:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60575 00:05:36.226 21:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60575 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60575 ']' 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60575 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.790 21:19:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60575 00:05:36.790 killing process with pid 60575 00:05:36.790 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.790 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.790 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60575' 00:05:36.790 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60575 00:05:36.790 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60575 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60575 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60575 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.046 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:37.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
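
The ERROR printed next is the expected result, not a test failure. After killprocess, default_locks runs NOT waitforlisten 60575; NOT (an autotest_common.sh helper) inverts the exit status, so the test only passes if waitforlisten now fails because the target is gone. That is where the "process (pid: 60575) is no longer running" message comes from, along with the harmless "kill: (60575) - No such process" noise from the helper's liveness check, after which no_locks confirms that no spdk_cpu_lock files were left behind. The assertion is roughly equivalent to:

    killprocess $pid                      # stop the target and wait for it to exit
    if waitforlisten $pid; then           # NOT waitforlisten: the RPC socket must stay dead
        echo "target is unexpectedly still answering" >&2
        exit 1
    fi
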
00:05:37.047 ERROR: process (pid: 60575) is no longer running 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60575 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60575 ']' 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.047 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60575) - No such process 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.047 00:05:37.047 real 0m1.824s 00:05:37.047 user 0m1.912s 00:05:37.047 sys 0m0.578s 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.047 ************************************ 00:05:37.047 END TEST default_locks 00:05:37.047 ************************************ 00:05:37.047 21:19:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.047 21:19:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:37.047 21:19:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.047 21:19:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.047 21:19:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.047 21:19:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.304 ************************************ 00:05:37.304 START TEST default_locks_via_rpc 00:05:37.304 ************************************ 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60627 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
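
default_locks_via_rpc (pid 60627, started below) covers the runtime toggles for the same mechanism: framework_disable_cpumask_locks releases the core lock files while the target keeps running, no_locks then verifies nothing named spdk_cpu_lock is left held, and framework_enable_cpumask_locks re-acquires the locks so the usual locks_exist check passes again before the process is killed. Approximately as follows, with $pid standing for the target started below; no_locks itself globs for leftover lock files, and the inverted lslocks line here is only an equivalent check for this sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_disable_cpumask_locks            # drop the core lock(s) at runtime
    ! lslocks -p $pid | grep -q spdk_cpu_lock       # nothing should be held now
    $rpc framework_enable_cpumask_locks             # take the lock(s) back
    lslocks -p $pid | grep -q spdk_cpu_lock         # locks_exist succeeds once more
    killprocess $pid
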
00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60627 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60627 ']' 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.304 21:19:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.304 [2024-07-15 21:19:10.483279] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:37.304 [2024-07-15 21:19:10.483363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:05:37.304 [2024-07-15 21:19:10.613080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.570 [2024-07-15 21:19:10.712037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.571 [2024-07-15 21:19:10.755059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60627 00:05:38.155 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60627 00:05:38.155 21:19:11 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60627 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60627 ']' 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60627 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60627 00:05:38.719 killing process with pid 60627 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60627' 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60627 00:05:38.719 21:19:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60627 00:05:38.977 00:05:38.977 real 0m1.803s 00:05:38.977 user 0m1.925s 00:05:38.977 sys 0m0.560s 00:05:38.977 21:19:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.977 ************************************ 00:05:38.977 END TEST default_locks_via_rpc 00:05:38.977 ************************************ 00:05:38.977 21:19:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.977 21:19:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.977 21:19:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:38.977 21:19:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.977 21:19:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.977 21:19:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.977 ************************************ 00:05:38.977 START TEST non_locking_app_on_locked_coremask 00:05:38.977 ************************************ 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60673 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60673 /var/tmp/spdk.sock 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60673 ']' 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.977 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.977 21:19:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.235 [2024-07-15 21:19:12.360641] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:39.235 [2024-07-15 21:19:12.360718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60673 ] 00:05:39.235 [2024-07-15 21:19:12.502517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.493 [2024-07-15 21:19:12.605967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.493 [2024-07-15 21:19:12.649095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60689 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60689 /var/tmp/spdk2.sock 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60689 ']' 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.073 21:19:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.073 [2024-07-15 21:19:13.281401] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:40.073 [2024-07-15 21:19:13.281479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:05:40.073 [2024-07-15 21:19:13.419377] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
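
The "CPU core locks deactivated." notice above is the point of non_locking_app_on_locked_coremask: the first target (pid 60673, -m 0x1) already holds the core-0 lock, and the second target (pid 60689) is started on the same core but with --disable-cpumask-locks and its own RPC socket, so it never attempts to claim the lock and both instances run side by side. Condensed from the trace, with pid1/pid2 standing in for 60673 and 60689:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x1 &                                            # holds the core-0 lock
    pid1=$!; waitforlisten $pid1
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!; waitforlisten $pid2 /var/tmp/spdk2.sock              # comes up despite the held lock
    lslocks -p $pid1 | grep -q spdk_cpu_lock                      # only the first target owns it
    killprocess $pid1; killprocess $pid2
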
00:05:40.073 [2024-07-15 21:19:13.419435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.333 [2024-07-15 21:19:13.620396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.591 [2024-07-15 21:19:13.705721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.848 21:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.849 21:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:40.849 21:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60673 00:05:40.849 21:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60673 00:05:40.849 21:19:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60673 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60673 ']' 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60673 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60673 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.784 killing process with pid 60673 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60673' 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60673 00:05:41.784 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60673 00:05:42.352 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60689 00:05:42.352 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60689 ']' 00:05:42.353 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60689 00:05:42.353 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:42.353 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.612 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60689 00:05:42.612 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.612 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.612 killing process with pid 60689 00:05:42.612 21:19:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60689' 00:05:42.612 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60689 00:05:42.612 21:19:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60689 00:05:42.870 00:05:42.871 real 0m3.767s 00:05:42.871 user 0m4.138s 00:05:42.871 sys 0m1.080s 00:05:42.871 21:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.871 21:19:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 ************************************ 00:05:42.871 END TEST non_locking_app_on_locked_coremask 00:05:42.871 ************************************ 00:05:42.871 21:19:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:42.871 21:19:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.871 21:19:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.871 21:19:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.871 21:19:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 ************************************ 00:05:42.871 START TEST locking_app_on_unlocked_coremask 00:05:42.871 ************************************ 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60756 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60756 /var/tmp/spdk.sock 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60756 ']' 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.871 21:19:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 [2024-07-15 21:19:16.175456] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:42.871 [2024-07-15 21:19:16.175523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60756 ] 00:05:43.130 [2024-07-15 21:19:16.300134] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:43.130 [2024-07-15 21:19:16.300180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.130 [2024-07-15 21:19:16.394805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.130 [2024-07-15 21:19:16.436033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60766 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60766 /var/tmp/spdk2.sock 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60766 ']' 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.698 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.698 [2024-07-15 21:19:17.061231] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:43.698 [2024-07-15 21:19:17.061298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60766 ] 00:05:43.957 [2024-07-15 21:19:17.196552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.215 [2024-07-15 21:19:17.395636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.215 [2024-07-15 21:19:17.481292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.782 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.782 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.782 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60766 00:05:44.782 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60766 00:05:44.782 21:19:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60756 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60756 ']' 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60756 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60756 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.718 killing process with pid 60756 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60756' 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60756 00:05:45.718 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60756 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60766 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60766 ']' 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60766 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60766 00:05:46.652 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.653 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.653 killing process with pid 60766 00:05:46.653 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60766' 00:05:46.653 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60766 00:05:46.653 21:19:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60766 00:05:46.910 00:05:46.910 real 0m3.896s 00:05:46.910 user 0m4.284s 00:05:46.910 sys 0m1.115s 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.910 ************************************ 00:05:46.910 END TEST locking_app_on_unlocked_coremask 00:05:46.910 ************************************ 00:05:46.910 21:19:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.910 21:19:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.910 21:19:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.910 21:19:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.910 21:19:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.910 ************************************ 00:05:46.910 START TEST locking_app_on_locked_coremask 00:05:46.910 ************************************ 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60833 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60833 /var/tmp/spdk.sock 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60833 ']' 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.910 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.910 [2024-07-15 21:19:20.150669] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:46.910 [2024-07-15 21:19:20.150732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60833 ] 00:05:47.168 [2024-07-15 21:19:20.292403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.168 [2024-07-15 21:19:20.381647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.168 [2024-07-15 21:19:20.422986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.732 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.732 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.732 21:19:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60848 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60848 /var/tmp/spdk2.sock 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60848 /var/tmp/spdk2.sock 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60848 /var/tmp/spdk2.sock 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60848 ']' 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.732 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.732 [2024-07-15 21:19:21.057177] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:47.732 [2024-07-15 21:19:21.057241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:05:47.991 [2024-07-15 21:19:21.191998] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60833 has claimed it. 00:05:47.991 [2024-07-15 21:19:21.192058] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.561 ERROR: process (pid: 60848) is no longer running 00:05:48.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60848) - No such process 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60833 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60833 00:05:48.561 21:19:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60833 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60833 ']' 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60833 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60833 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.129 killing process with pid 60833 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60833' 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60833 00:05:49.129 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60833 00:05:49.387 00:05:49.387 real 0m2.480s 00:05:49.387 user 0m2.771s 00:05:49.387 sys 0m0.635s 00:05:49.387 21:19:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.387 21:19:22 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:49.387 ************************************ 00:05:49.387 END TEST locking_app_on_locked_coremask 00:05:49.387 ************************************ 00:05:49.387 21:19:22 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.387 21:19:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.387 21:19:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.387 21:19:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.387 21:19:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.387 ************************************ 00:05:49.387 START TEST locking_overlapped_coremask 00:05:49.387 ************************************ 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60895 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60895 /var/tmp/spdk.sock 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60895 ']' 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.387 21:19:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.387 [2024-07-15 21:19:22.709361] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:49.387 [2024-07-15 21:19:22.709430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60895 ] 00:05:49.646 [2024-07-15 21:19:22.839019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.646 [2024-07-15 21:19:22.932510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.646 [2024-07-15 21:19:22.932695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.646 [2024-07-15 21:19:22.932696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.646 [2024-07-15 21:19:22.973803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60913 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60913 /var/tmp/spdk2.sock 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60913 /var/tmp/spdk2.sock 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60913 /var/tmp/spdk2.sock 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 60913 ']' 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.211 21:19:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.468 [2024-07-15 21:19:23.602494] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:50.468 [2024-07-15 21:19:23.602561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60913 ] 00:05:50.468 [2024-07-15 21:19:23.736749] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60895 has claimed it. 00:05:50.468 [2024-07-15 21:19:23.736804] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.033 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60913) - No such process 00:05:51.033 ERROR: process (pid: 60913) is no longer running 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60895 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 60895 ']' 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 60895 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60895 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.033 killing process with pid 60895 00:05:51.033 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.034 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60895' 00:05:51.034 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 60895 00:05:51.034 21:19:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 60895 00:05:51.291 00:05:51.291 real 0m2.010s 00:05:51.291 user 0m5.485s 00:05:51.291 sys 0m0.408s 00:05:51.291 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.291 21:19:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.291 ************************************ 00:05:51.291 END TEST locking_overlapped_coremask 00:05:51.291 ************************************ 00:05:51.551 21:19:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.551 21:19:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.551 21:19:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.551 21:19:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.551 21:19:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.551 ************************************ 00:05:51.551 START TEST locking_overlapped_coremask_via_rpc 00:05:51.551 ************************************ 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60953 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60953 /var/tmp/spdk.sock 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60953 ']' 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.551 21:19:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.551 [2024-07-15 21:19:24.786989] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:51.551 [2024-07-15 21:19:24.787058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60953 ] 00:05:51.551 [2024-07-15 21:19:24.915705] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.551 [2024-07-15 21:19:24.915995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.810 [2024-07-15 21:19:25.008652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.810 [2024-07-15 21:19:25.008836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.810 [2024-07-15 21:19:25.008866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.810 [2024-07-15 21:19:25.049936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60971 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60971 /var/tmp/spdk2.sock 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60971 ']' 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.378 21:19:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.696 [2024-07-15 21:19:25.761589] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:52.696 [2024-07-15 21:19:25.761682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60971 ] 00:05:52.696 [2024-07-15 21:19:25.901040] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.696 [2024-07-15 21:19:25.901096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.955 [2024-07-15 21:19:26.114507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.955 [2024-07-15 21:19:26.114563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.955 [2024-07-15 21:19:26.114563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.955 [2024-07-15 21:19:26.199886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.549 [2024-07-15 21:19:26.705921] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60953 has claimed it. 
00:05:53.549 request: 00:05:53.549 { 00:05:53.549 "method": "framework_enable_cpumask_locks", 00:05:53.549 "req_id": 1 00:05:53.549 } 00:05:53.549 Got JSON-RPC error response 00:05:53.549 response: 00:05:53.549 { 00:05:53.549 "code": -32603, 00:05:53.549 "message": "Failed to claim CPU core: 2" 00:05:53.549 } 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60953 /var/tmp/spdk.sock 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60953 ']' 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.549 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60971 /var/tmp/spdk2.sock 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60971 ']' 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.809 21:19:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.068 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.068 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.068 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:54.069 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.069 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.069 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.069 00:05:54.069 real 0m2.453s 00:05:54.069 user 0m1.175s 00:05:54.069 sys 0m0.200s 00:05:54.069 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.069 21:19:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.069 ************************************ 00:05:54.069 END TEST locking_overlapped_coremask_via_rpc 00:05:54.069 ************************************ 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.069 21:19:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.069 21:19:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60953 ]] 00:05:54.069 21:19:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60953 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60953 ']' 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60953 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60953 00:05:54.069 killing process with pid 60953 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60953' 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60953 00:05:54.069 21:19:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60953 00:05:54.328 21:19:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60971 ]] 00:05:54.328 21:19:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60971 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60971 ']' 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60971 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:54.328 21:19:27 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60971 00:05:54.328 killing process with pid 60971 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60971' 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 60971 00:05:54.328 21:19:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 60971 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60953 ]] 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60953 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60953 ']' 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60953 00:05:54.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60953) - No such process 00:05:54.587 Process with pid 60953 is not found 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60953 is not found' 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60971 ]] 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60971 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 60971 ']' 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 60971 00:05:54.587 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (60971) - No such process 00:05:54.587 Process with pid 60971 is not found 00:05:54.587 21:19:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 60971 is not found' 00:05:54.587 21:19:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.847 00:05:54.847 real 0m19.578s 00:05:54.847 user 0m33.156s 00:05:54.847 sys 0m5.490s 00:05:54.847 ************************************ 00:05:54.847 END TEST cpu_locks 00:05:54.847 ************************************ 00:05:54.847 21:19:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.847 21:19:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.847 21:19:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.847 00:05:54.847 real 0m47.032s 00:05:54.847 user 1m28.728s 00:05:54.847 sys 0m9.383s 00:05:54.847 ************************************ 00:05:54.847 END TEST event 00:05:54.847 ************************************ 00:05:54.847 21:19:28 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.847 21:19:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.847 21:19:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.847 21:19:28 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.847 21:19:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.847 21:19:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.847 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:05:54.847 ************************************ 00:05:54.847 START TEST thread 
00:05:54.847 ************************************ 00:05:54.847 21:19:28 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.847 * Looking for test storage... 00:05:55.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:55.106 21:19:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.106 21:19:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:55.106 21:19:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.106 21:19:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.106 ************************************ 00:05:55.106 START TEST thread_poller_perf 00:05:55.106 ************************************ 00:05:55.106 21:19:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:55.106 [2024-07-15 21:19:28.263233] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:55.106 [2024-07-15 21:19:28.263334] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:05:55.106 [2024-07-15 21:19:28.406762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.366 [2024-07-15 21:19:28.504014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.366 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:56.303 ====================================== 00:05:56.303 busy:2501346044 (cyc) 00:05:56.303 total_run_count: 405000 00:05:56.303 tsc_hz: 2490000000 (cyc) 00:05:56.303 ====================================== 00:05:56.303 poller_cost: 6176 (cyc), 2480 (nsec) 00:05:56.303 00:05:56.303 real 0m1.343s 00:05:56.303 user 0m1.184s 00:05:56.303 sys 0m0.052s 00:05:56.303 21:19:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.303 21:19:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 END TEST thread_poller_perf 00:05:56.303 ************************************ 00:05:56.303 21:19:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:56.303 21:19:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.303 21:19:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:56.303 21:19:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.303 21:19:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.303 ************************************ 00:05:56.303 START TEST thread_poller_perf 00:05:56.303 ************************************ 00:05:56.303 21:19:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.562 [2024-07-15 21:19:29.676016] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:05:56.562 [2024-07-15 21:19:29.676110] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61129 ] 00:05:56.562 [2024-07-15 21:19:29.818094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.562 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:56.562 [2024-07-15 21:19:29.914864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.961 ====================================== 00:05:57.961 busy:2492204080 (cyc) 00:05:57.961 total_run_count: 5343000 00:05:57.961 tsc_hz: 2490000000 (cyc) 00:05:57.961 ====================================== 00:05:57.962 poller_cost: 466 (cyc), 187 (nsec) 00:05:57.962 00:05:57.962 real 0m1.333s 00:05:57.962 user 0m1.166s 00:05:57.962 sys 0m0.060s 00:05:57.962 21:19:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.962 21:19:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.962 ************************************ 00:05:57.962 END TEST thread_poller_perf 00:05:57.962 ************************************ 00:05:57.962 21:19:31 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:57.962 21:19:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:57.962 ************************************ 00:05:57.962 END TEST thread 00:05:57.962 ************************************ 00:05:57.962 00:05:57.962 real 0m2.944s 00:05:57.962 user 0m2.458s 00:05:57.962 sys 0m0.280s 00:05:57.962 21:19:31 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.962 21:19:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.962 21:19:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.962 21:19:31 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:57.962 21:19:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.962 21:19:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.962 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.962 ************************************ 00:05:57.962 START TEST accel 00:05:57.962 ************************************ 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:57.962 * Looking for test storage... 
00:05:57.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:57.962 21:19:31 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:57.962 21:19:31 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:57.962 21:19:31 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.962 21:19:31 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61198 00:05:57.962 21:19:31 accel -- accel/accel.sh@63 -- # waitforlisten 61198 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@829 -- # '[' -z 61198 ']' 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.962 21:19:31 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:57.962 21:19:31 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.962 21:19:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.962 21:19:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.962 21:19:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.962 21:19:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.962 21:19:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.962 21:19:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.962 21:19:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:57.962 21:19:31 accel -- accel/accel.sh@41 -- # jq -r . 00:05:57.962 [2024-07-15 21:19:31.292025] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:57.962 [2024-07-15 21:19:31.292233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61198 ] 00:05:58.220 [2024-07-15 21:19:31.433691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.220 [2024-07-15 21:19:31.530003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.220 [2024-07-15 21:19:31.571572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.786 21:19:32 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.786 21:19:32 accel -- common/autotest_common.sh@862 -- # return 0 00:05:58.786 21:19:32 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:58.786 21:19:32 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:58.786 21:19:32 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:58.786 21:19:32 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:58.786 21:19:32 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:58.786 21:19:32 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:58.786 21:19:32 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.786 21:19:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.786 21:19:32 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:58.786 21:19:32 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.786 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:58.786 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.786 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:58.786 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:58.786 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:58.786 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:58.786 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # IFS== 00:05:59.045 21:19:32 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:59.045 21:19:32 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:59.045 21:19:32 accel -- accel/accel.sh@75 -- # killprocess 61198 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@948 -- # '[' -z 61198 ']' 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@952 -- # kill -0 61198 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@953 -- # uname 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61198 00:05:59.045 killing process with pid 61198 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61198' 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@967 -- # kill 61198 00:05:59.045 21:19:32 accel -- common/autotest_common.sh@972 -- # wait 61198 00:05:59.327 21:19:32 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:59.327 21:19:32 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:59.327 21:19:32 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:59.327 21:19:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.327 21:19:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.327 21:19:32 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.327 21:19:32 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.328 21:19:32 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:59.328 21:19:32 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:59.328 21:19:32 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.328 21:19:32 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:59.328 21:19:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.328 21:19:32 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:59.328 21:19:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:59.328 21:19:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.328 21:19:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.328 ************************************ 00:05:59.328 START TEST accel_missing_filename 00:05:59.328 ************************************ 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.328 21:19:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:59.328 21:19:32 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:59.328 [2024-07-15 21:19:32.646444] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:59.328 [2024-07-15 21:19:32.646526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61255 ] 00:05:59.586 [2024-07-15 21:19:32.786563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.586 [2024-07-15 21:19:32.868161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.586 [2024-07-15 21:19:32.910249] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.844 [2024-07-15 21:19:32.969588] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:59.844 A filename is required. 
00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.844 00:05:59.844 real 0m0.431s 00:05:59.844 user 0m0.272s 00:05:59.844 sys 0m0.097s 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.844 21:19:33 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:59.844 ************************************ 00:05:59.844 END TEST accel_missing_filename 00:05:59.844 ************************************ 00:05:59.844 21:19:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.844 21:19:33 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.844 21:19:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:59.844 21:19:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.844 21:19:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.844 ************************************ 00:05:59.844 START TEST accel_compress_verify 00:05:59.844 ************************************ 00:05:59.844 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.844 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.845 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.845 21:19:33 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:59.845 21:19:33 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:59.845 [2024-07-15 21:19:33.141070] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:05:59.845 [2024-07-15 21:19:33.141319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61274 ] 00:06:00.102 [2024-07-15 21:19:33.283216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.102 [2024-07-15 21:19:33.371890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.102 [2024-07-15 21:19:33.418017] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.358 [2024-07-15 21:19:33.487947] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:00.358 00:06:00.358 Compression does not support the verify option, aborting. 00:06:00.358 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:00.358 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.358 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:00.358 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.359 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:00.359 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.359 00:06:00.359 real 0m0.485s 00:06:00.359 user 0m0.310s 00:06:00.359 sys 0m0.111s 00:06:00.359 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.359 21:19:33 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 ************************************ 00:06:00.359 END TEST accel_compress_verify 00:06:00.359 ************************************ 00:06:00.359 21:19:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.359 21:19:33 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:00.359 21:19:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.359 21:19:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.359 21:19:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 ************************************ 00:06:00.359 START TEST accel_wrong_workload 00:06:00.359 ************************************ 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:00.359 21:19:33 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:00.359 Unsupported workload type: foobar 00:06:00.359 [2024-07-15 21:19:33.689241] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:00.359 accel_perf options: 00:06:00.359 [-h help message] 00:06:00.359 [-q queue depth per core] 00:06:00.359 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.359 [-T number of threads per core 00:06:00.359 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.359 [-t time in seconds] 00:06:00.359 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.359 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:00.359 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.359 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.359 [-S for crc32c workload, use this seed value (default 0) 00:06:00.359 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.359 [-f for fill workload, use this BYTE value (default 255) 00:06:00.359 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.359 [-y verify result if this switch is on] 00:06:00.359 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.359 Can be used to spread operations across a wider range of memory. 
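The option listing above is printed by accel_perf itself when "-w foobar" is rejected, and it documents every switch the passing tests in this run rely on. For contrast with the failing call, here are example invocations assembled only from those documented options; they are illustrative (paths taken from this log), and only the crc32c form is actually executed later in the run.

  # crc32c with seed value 32 and result verification, as run by accel_crc32c below:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  # compress takes its input file via -l, and -y is not accepted for it
  # (see "Compression does not support the verify option" in accel_compress_verify above):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  # xor needs at least two source buffers, so -x must be >= 2:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2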
00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.359 00:06:00.359 real 0m0.039s 00:06:00.359 user 0m0.021s 00:06:00.359 sys 0m0.017s 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.359 ************************************ 00:06:00.359 21:19:33 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:00.359 END TEST accel_wrong_workload 00:06:00.359 ************************************ 00:06:00.615 21:19:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.615 21:19:33 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.615 21:19:33 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:00.615 21:19:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.615 21:19:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.615 ************************************ 00:06:00.615 START TEST accel_negative_buffers 00:06:00.615 ************************************ 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:00.615 21:19:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:00.615 -x option must be non-negative. 
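Throughout these tests accel_perf is started as "accel_perf -c /dev/fd/62 ...": build_accel_config assembles a JSON configuration (empty in these runs, since accel_json_cfg=() never gains entries) and the harness hands it to the tool over an anonymous file descriptor rather than a file on disk. A hedged sketch of that plumbing, assuming -c accepts a JSON config path as the trace suggests; the JSON content here is hypothetical:

  # Feed a (here empty) JSON accel config to accel_perf via process substitution;
  # bash exposes it as /dev/fd/NN, matching the /dev/fd/62 seen in the trace.
  accel_json_cfg='{}'
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(echo "$accel_json_cfg" | jq -r .) -t 1 -w crc32c -y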
00:06:00.615 [2024-07-15 21:19:33.792680] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:00.615 accel_perf options: 00:06:00.615 [-h help message] 00:06:00.615 [-q queue depth per core] 00:06:00.615 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:00.615 [-T number of threads per core 00:06:00.615 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:00.615 [-t time in seconds] 00:06:00.615 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:00.615 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:00.615 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:00.615 [-l for compress/decompress workloads, name of uncompressed input file 00:06:00.615 [-S for crc32c workload, use this seed value (default 0) 00:06:00.615 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:00.615 [-f for fill workload, use this BYTE value (default 255) 00:06:00.615 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:00.615 [-y verify result if this switch is on] 00:06:00.615 [-a tasks to allocate per core (default: same value as -q)] 00:06:00.615 Can be used to spread operations across a wider range of memory. 00:06:00.615 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:00.615 ************************************ 00:06:00.615 END TEST accel_negative_buffers 00:06:00.615 ************************************ 00:06:00.616 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.616 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.616 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.616 00:06:00.616 real 0m0.039s 00:06:00.616 user 0m0.022s 00:06:00.616 sys 0m0.017s 00:06:00.616 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.616 21:19:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:00.616 21:19:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.616 21:19:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:00.616 21:19:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:00.616 21:19:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.616 21:19:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.616 ************************************ 00:06:00.616 START TEST accel_crc32c 00:06:00.616 ************************************ 00:06:00.616 21:19:33 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:00.616 21:19:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:00.616 [2024-07-15 21:19:33.896015] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:00.616 [2024-07-15 21:19:33.896237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61338 ] 00:06:00.872 [2024-07-15 21:19:34.038261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.872 [2024-07-15 21:19:34.132177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.872 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.873 21:19:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:02.247 21:19:35 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.247 ************************************ 00:06:02.247 END TEST accel_crc32c 00:06:02.247 ************************************ 00:06:02.247 00:06:02.247 real 0m1.447s 00:06:02.247 user 0m1.251s 00:06:02.247 sys 0m0.107s 00:06:02.247 21:19:35 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.247 21:19:35 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 21:19:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.247 21:19:35 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:02.247 21:19:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.247 21:19:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.247 21:19:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 START TEST accel_crc32c_C2 00:06:02.247 ************************************ 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.247 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:02.247 [2024-07-15 21:19:35.410451] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:02.247 [2024-07-15 21:19:35.410747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:06:02.247 [2024-07-15 21:19:35.553052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.509 [2024-07-15 21:19:35.650622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.509 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.510 21:19:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.884 
21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.884 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.885 00:06:03.885 real 0m1.448s 00:06:03.885 user 0m1.258s 00:06:03.885 sys 0m0.103s 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.885 21:19:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:03.885 ************************************ 00:06:03.885 END TEST accel_crc32c_C2 00:06:03.885 ************************************ 00:06:03.885 21:19:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.885 21:19:36 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:03.885 21:19:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.885 21:19:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.885 21:19:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.885 ************************************ 00:06:03.885 START TEST accel_copy 00:06:03.885 ************************************ 00:06:03.885 21:19:36 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:03.885 21:19:36 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:03.885 21:19:36 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:03.885 [2024-07-15 21:19:36.926507] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:03.885 [2024-07-15 21:19:36.926576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ] 00:06:03.885 [2024-07-15 21:19:37.063057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.885 [2024-07-15 21:19:37.161661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:03.885 21:19:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:05.260 21:19:38 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.260 00:06:05.260 real 0m1.445s 00:06:05.260 user 0m1.247s 00:06:05.260 sys 0m0.100s 00:06:05.260 21:19:38 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.260 ************************************ 00:06:05.260 END TEST accel_copy 00:06:05.260 ************************************ 00:06:05.260 21:19:38 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:05.260 21:19:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.260 21:19:38 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.260 21:19:38 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:05.260 21:19:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.260 21:19:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.260 ************************************ 00:06:05.260 START TEST accel_fill 00:06:05.260 ************************************ 00:06:05.260 21:19:38 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.260 21:19:38 
accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:05.260 21:19:38 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:05.260 [2024-07-15 21:19:38.430211] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:05.260 [2024-07-15 21:19:38.430296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61436 ] 00:06:05.260 [2024-07-15 21:19:38.579099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.538 [2024-07-15 21:19:38.674372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.538 21:19:38 accel.accel_fill 
-- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.538 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:05.539 21:19:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
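For the accel_fill case above, the harness ran "accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y": per the option list printed earlier, -f is the fill byte (128 = 0x80, matching val=0x80 in the trace), -q the queue depth per core, and -a the number of tasks to allocate per core, which defaults to the -q value anyway, so -a 64 here only makes the preallocation explicit. The same invocation, repeated for reference (illustrative; the real run also passes the -c config descriptor):

  # fill byte 0x80 with queue depth 64 and 64 preallocated tasks per core, verifying the result:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y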
00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.477 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.735 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.735 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.735 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.735 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.735 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:06.736 21:19:39 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.736 00:06:06.736 real 0m1.443s 00:06:06.736 user 0m1.259s 00:06:06.736 sys 0m0.095s 00:06:06.736 21:19:39 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.736 ************************************ 00:06:06.736 END TEST accel_fill 00:06:06.736 ************************************ 00:06:06.736 21:19:39 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:06.736 21:19:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.736 21:19:39 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:06.736 21:19:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.736 21:19:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.736 21:19:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.736 ************************************ 00:06:06.736 START TEST accel_copy_crc32c 00:06:06.736 ************************************ 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:06.736 21:19:39 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:06.736 [2024-07-15 21:19:39.952528] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:06.736 [2024-07-15 21:19:39.952747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:06:06.736 [2024-07-15 21:19:40.094181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.993 [2024-07-15 21:19:40.198477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.993 21:19:40 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.363 00:06:08.363 real 0m1.463s 00:06:08.363 user 0m1.259s 00:06:08.363 sys 0m0.115s 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.363 21:19:41 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 ************************************ 00:06:08.363 END TEST accel_copy_crc32c 00:06:08.363 ************************************ 00:06:08.363 21:19:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.363 21:19:41 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.363 21:19:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.363 21:19:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.363 21:19:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.363 ************************************ 00:06:08.363 START TEST accel_copy_crc32c_C2 00:06:08.363 ************************************ 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:08.363 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.364 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:08.364 [2024-07-15 21:19:41.478742] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:08.364 [2024-07-15 21:19:41.478863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61505 ] 00:06:08.364 [2024-07-15 21:19:41.621125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.364 [2024-07-15 21:19:41.724573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.620 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.621 21:19:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.554 00:06:09.554 real 0m1.461s 00:06:09.554 user 0m1.270s 00:06:09.554 sys 0m0.105s 00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:09.554 21:19:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:09.554 ************************************ 00:06:09.554 END TEST accel_copy_crc32c_C2 00:06:09.554 ************************************ 00:06:09.812 21:19:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.812 21:19:42 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:09.812 21:19:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:09.812 21:19:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.812 21:19:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.812 ************************************ 00:06:09.812 START TEST accel_dualcast 00:06:09.812 ************************************ 00:06:09.812 21:19:42 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:09.812 21:19:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:09.812 [2024-07-15 21:19:43.004762] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:09.812 [2024-07-15 21:19:43.004884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61540 ] 00:06:09.812 [2024-07-15 21:19:43.143968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.075 [2024-07-15 21:19:43.238925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:10.075 21:19:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:11.469 21:19:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.469 00:06:11.469 real 0m1.445s 00:06:11.469 user 0m1.257s 00:06:11.469 sys 0m0.095s 00:06:11.469 21:19:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.469 21:19:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:11.469 ************************************ 00:06:11.469 END TEST accel_dualcast 00:06:11.469 ************************************ 00:06:11.469 21:19:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.469 21:19:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:11.469 21:19:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.469 21:19:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.469 21:19:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.469 ************************************ 00:06:11.469 START TEST accel_compare 00:06:11.469 ************************************ 00:06:11.469 21:19:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:11.469 21:19:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:11.469 [2024-07-15 21:19:44.525643] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:11.470 [2024-07-15 21:19:44.525893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61574 ] 00:06:11.470 [2024-07-15 21:19:44.667542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.470 [2024-07-15 21:19:44.762382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:11.470 21:19:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.844 ************************************ 00:06:12.844 END TEST accel_compare 00:06:12.844 ************************************ 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:12.844 21:19:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.844 00:06:12.844 real 0m1.449s 00:06:12.844 user 0m1.251s 00:06:12.844 sys 0m0.108s 00:06:12.844 21:19:45 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.844 21:19:45 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 21:19:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.844 21:19:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:12.844 21:19:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.844 21:19:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.844 21:19:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.844 ************************************ 00:06:12.844 START TEST accel_xor 00:06:12.844 ************************************ 00:06:12.844 21:19:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:12.844 21:19:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:12.844 [2024-07-15 21:19:46.036714] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:12.844 [2024-07-15 21:19:46.036806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61609 ] 00:06:12.844 [2024-07-15 21:19:46.178842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.101 [2024-07-15 21:19:46.271948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.101 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:13.102 21:19:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.473 00:06:14.473 real 0m1.445s 00:06:14.473 user 0m1.262s 00:06:14.473 sys 0m0.093s 00:06:14.473 ************************************ 00:06:14.473 END TEST accel_xor 00:06:14.473 ************************************ 00:06:14.473 21:19:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.473 21:19:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:14.473 21:19:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.473 21:19:47 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:14.473 21:19:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:14.473 21:19:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.473 21:19:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.473 ************************************ 00:06:14.473 START TEST accel_xor 00:06:14.473 ************************************ 00:06:14.473 21:19:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:14.473 21:19:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:14.473 [2024-07-15 21:19:47.558052] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:14.473 [2024-07-15 21:19:47.558287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61643 ] 00:06:14.474 [2024-07-15 21:19:47.700386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.474 [2024-07-15 21:19:47.791089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.474 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:14.732 21:19:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.664 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.665 21:19:48 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:15.665 21:19:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.665 ************************************ 00:06:15.665 END TEST accel_xor 00:06:15.665 ************************************ 00:06:15.665 00:06:15.665 real 0m1.447s 00:06:15.665 user 0m1.258s 00:06:15.665 sys 0m0.100s 00:06:15.665 21:19:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.665 21:19:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:15.665 21:19:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.665 21:19:49 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:15.665 21:19:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:15.665 21:19:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.665 21:19:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.923 ************************************ 00:06:15.923 START TEST accel_dif_verify 00:06:15.923 ************************************ 00:06:15.923 21:19:49 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:15.923 21:19:49 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:15.923 [2024-07-15 21:19:49.074238] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
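For reference, the accel_dif_verify case being set up here boils down to the accel_perf invocation recorded in the trace above. A minimal manual reproduction against the same checkout (flags copied verbatim from the trace; the accel config piped in over /dev/fd/62 is empty in this run, per the accel_json_cfg=() lines, so it is left out):

    # 1-second software dif_verify workload, as accel.sh drives it above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify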
00:06:15.923 [2024-07-15 21:19:49.074338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:06:15.923 [2024-07-15 21:19:49.211024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.180 [2024-07-15 21:19:49.303557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:16.180 21:19:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:17.118 21:19:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.118 00:06:17.118 real 0m1.441s 00:06:17.118 user 0m1.252s 00:06:17.118 sys 0m0.102s 00:06:17.118 21:19:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.118 21:19:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:17.118 ************************************ 00:06:17.118 END TEST accel_dif_verify 00:06:17.118 ************************************ 00:06:17.377 21:19:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.377 21:19:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:17.377 21:19:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:17.377 21:19:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.377 21:19:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.377 ************************************ 00:06:17.377 START TEST accel_dif_generate 00:06:17.377 ************************************ 00:06:17.377 21:19:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.377 21:19:50 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:17.377 21:19:50 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:17.377 [2024-07-15 21:19:50.582487] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:17.377 [2024-07-15 21:19:50.582564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ] 00:06:17.377 [2024-07-15 21:19:50.722931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.636 [2024-07-15 21:19:50.808015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.636 21:19:50 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:17.636 21:19:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:19.010 21:19:51 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.010 00:06:19.010 real 0m1.436s 
00:06:19.010 user 0m1.253s 00:06:19.010 sys 0m0.095s 00:06:19.010 ************************************ 00:06:19.010 END TEST accel_dif_generate 00:06:19.010 ************************************ 00:06:19.010 21:19:51 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.010 21:19:51 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:19.010 21:19:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.010 21:19:52 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:19.010 21:19:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.010 21:19:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.010 21:19:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.010 ************************************ 00:06:19.010 START TEST accel_dif_generate_copy 00:06:19.010 ************************************ 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.010 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:19.011 [2024-07-15 21:19:52.087180] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
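The long runs of "-- # IFS=:", "-- # read -r var val" and "-- # case \"$var\" in" entries throughout this block are xtrace of accel.sh looping over what looks like the key:value summary printed by accel_perf, picking out values such as the opcode and module seen in the val= lines. A generic sketch of that shell idiom (hypothetical key names, not the literal accel.sh source):

    while IFS=: read -r var val; do             # split each "key: value" line on the colon
        case "$var" in
            *"workload"*) accel_opc=$val ;;     # hypothetical key; yields e.g. val=dif_generate_copy
            *"module"*)   accel_module=$val ;;  # hypothetical key; yields e.g. val=software
        esac
    done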
00:06:19.011 [2024-07-15 21:19:52.087259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61749 ] 00:06:19.011 [2024-07-15 21:19:52.226691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.011 [2024-07-15 21:19:52.304880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:19.011 21:19:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.386 00:06:20.386 real 0m1.427s 00:06:20.386 user 0m1.241s 00:06:20.386 sys 0m0.098s 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.386 ************************************ 00:06:20.386 END TEST accel_dif_generate_copy 00:06:20.386 ************************************ 00:06:20.386 21:19:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.387 21:19:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.387 21:19:53 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:20.387 21:19:53 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.387 21:19:53 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:20.387 21:19:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.387 21:19:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.387 ************************************ 00:06:20.387 START TEST accel_comp 00:06:20.387 ************************************ 00:06:20.387 21:19:53 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:20.387 21:19:53 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:20.387 21:19:53 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:20.387 [2024-07-15 21:19:53.580957] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:20.387 [2024-07-15 21:19:53.581035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:06:20.387 [2024-07-15 21:19:53.722533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.644 [2024-07-15 21:19:53.803996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.644 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.645 21:19:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:21.620 21:19:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.620 00:06:21.620 real 0m1.436s 00:06:21.620 user 0m1.240s 00:06:21.620 sys 0m0.101s 00:06:21.620 ************************************ 00:06:21.620 END TEST accel_comp 00:06:21.620 ************************************ 00:06:21.620 21:19:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.620 21:19:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:21.879 21:19:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.879 21:19:55 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.879 21:19:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.879 21:19:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.879 21:19:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.879 ************************************ 00:06:21.879 START TEST accel_decomp 00:06:21.879 ************************************ 00:06:21.879 21:19:55 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:21.879 21:19:55 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:21.879 [2024-07-15 21:19:55.082864] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:21.879 [2024-07-15 21:19:55.082942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61812 ] 00:06:21.879 [2024-07-15 21:19:55.208696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.138 [2024-07-15 21:19:55.286500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
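The compress and decompress cases in this block both read the bib test file under test/accel in the SPDK repo. The two underlying accel_perf invocations recorded above can be rerun by hand (flags copied verbatim from the trace; the /dev/fd/62 config is again omitted):

    BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress   -l "$BIB"
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y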
00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.138 21:19:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.514 21:19:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.514 ************************************ 00:06:23.514 END TEST accel_decomp 00:06:23.514 ************************************ 00:06:23.514 00:06:23.514 real 0m1.417s 00:06:23.514 user 0m1.217s 00:06:23.514 sys 0m0.110s 00:06:23.514 21:19:56 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.514 21:19:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:23.514 21:19:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.515 21:19:56 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.515 21:19:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:23.515 21:19:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.515 21:19:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.515 ************************************ 00:06:23.515 START TEST accel_decomp_full 00:06:23.515 ************************************ 00:06:23.515 21:19:56 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:23.515 [2024-07-15 21:19:56.572430] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:23.515 [2024-07-15 21:19:56.572508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:06:23.515 [2024-07-15 21:19:56.713754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.515 [2024-07-15 21:19:56.807545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:23.515 21:19:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.890 21:19:57 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.890 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.891 ************************************ 00:06:24.891 END TEST accel_decomp_full 00:06:24.891 ************************************ 00:06:24.891 21:19:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.891 00:06:24.891 real 0m1.453s 00:06:24.891 user 0m1.255s 00:06:24.891 sys 0m0.111s 00:06:24.891 21:19:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.891 21:19:57 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:24.891 21:19:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.891 21:19:58 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:24.891 21:19:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:24.891 21:19:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.891 21:19:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.891 ************************************ 00:06:24.891 START TEST accel_decomp_mcore 00:06:24.891 ************************************ 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:24.891 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:24.891 [2024-07-15 21:19:58.095922] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:24.891 [2024-07-15 21:19:58.096009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61881 ] 00:06:24.891 [2024-07-15 21:19:58.236707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.149 [2024-07-15 21:19:58.331776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.149 [2024-07-15 21:19:58.331958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.149 [2024-07-15 21:19:58.332152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.149 [2024-07-15 21:19:58.332153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.149 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.150 21:19:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.522 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.523 00:06:26.523 real 0m1.459s 00:06:26.523 user 0m0.025s 00:06:26.523 sys 0m0.005s 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.523 ************************************ 00:06:26.523 END TEST accel_decomp_mcore 00:06:26.523 ************************************ 00:06:26.523 21:19:59 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:26.523 21:19:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.523 21:19:59 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.523 21:19:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.523 21:19:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.523 21:19:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.523 ************************************ 00:06:26.523 START TEST accel_decomp_full_mcore 00:06:26.523 ************************************ 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.523 21:19:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:26.523 [2024-07-15 21:19:59.617435] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:26.523 [2024-07-15 21:19:59.617525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61920 ] 00:06:26.523 [2024-07-15 21:19:59.750728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.523 [2024-07-15 21:19:59.845411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.523 [2024-07-15 21:19:59.845608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.523 [2024-07-15 21:19:59.846418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.523 [2024-07-15 21:19:59.846421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.523 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.781 21:19:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.781 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.782 21:19:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.740 00:06:27.740 real 0m1.465s 00:06:27.740 user 0m4.601s 00:06:27.740 sys 0m0.122s 00:06:27.740 ************************************ 00:06:27.740 END TEST accel_decomp_full_mcore 00:06:27.740 ************************************ 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.740 21:20:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:27.740 21:20:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.740 21:20:01 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.740 21:20:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:27.740 21:20:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.740 21:20:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.999 ************************************ 00:06:27.999 START TEST accel_decomp_mthread 00:06:27.999 ************************************ 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:27.999 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:27.999 [2024-07-15 21:20:01.139528] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:27.999 [2024-07-15 21:20:01.139594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:06:27.999 [2024-07-15 21:20:01.281748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.258 [2024-07-15 21:20:01.373035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.258 21:20:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.192 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.450 00:06:29.450 real 0m1.452s 00:06:29.450 user 0m1.261s 00:06:29.450 sys 0m0.101s 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.450 ************************************ 00:06:29.450 END TEST accel_decomp_mthread 00:06:29.450 ************************************ 00:06:29.450 21:20:02 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:29.450 21:20:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.450 21:20:02 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.450 21:20:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:29.450 21:20:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.450 21:20:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.450 ************************************ 00:06:29.450 START 
TEST accel_decomp_full_mthread 00:06:29.450 ************************************ 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:29.450 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:29.450 [2024-07-15 21:20:02.673404] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:29.450 [2024-07-15 21:20:02.673624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61992 ] 00:06:29.450 [2024-07-15 21:20:02.815375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.718 [2024-07-15 21:20:02.913576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.718 21:20:02 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.718 21:20:02 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.108 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.109 00:06:31.109 real 0m1.499s 00:06:31.109 user 0m1.304s 00:06:31.109 sys 0m0.105s 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.109 21:20:04 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:31.109 ************************************ 00:06:31.109 END TEST accel_decomp_full_mthread 00:06:31.109 ************************************ 
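The accel_decomp_full_mthread case above boils down to a single accel_perf invocation. A minimal standalone sketch using the same build-tree paths as this run; the -c JSON config that the harness pipes in over /dev/fd/62 is omitted here, which should simply leave the default software accel module in place:

    # decompress workload against test/accel/bib for 1 second with output verification (-y);
    # the -o 0 / -T 2 flags mirror the "full, multi-thread" variant traced above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2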
00:06:31.109 21:20:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.109 21:20:04 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:31.109 21:20:04 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.109 21:20:04 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:31.109 21:20:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.109 21:20:04 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.109 21:20:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.109 21:20:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.109 21:20:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.109 21:20:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.109 21:20:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.109 21:20:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.109 21:20:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:31.109 21:20:04 accel -- accel/accel.sh@41 -- # jq -r . 00:06:31.109 ************************************ 00:06:31.109 START TEST accel_dif_functional_tests 00:06:31.109 ************************************ 00:06:31.109 21:20:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.109 [2024-07-15 21:20:04.264628] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:31.109 [2024-07-15 21:20:04.264701] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62024 ] 00:06:31.109 [2024-07-15 21:20:04.406137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.367 [2024-07-15 21:20:04.503949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.367 [2024-07-15 21:20:04.504028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.367 [2024-07-15 21:20:04.504028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.367 [2024-07-15 21:20:04.547517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.367 00:06:31.367 00:06:31.367 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.367 http://cunit.sourceforge.net/ 00:06:31.367 00:06:31.367 00:06:31.367 Suite: accel_dif 00:06:31.367 Test: verify: DIF generated, GUARD check ...passed 00:06:31.367 Test: verify: DIF generated, APPTAG check ...passed 00:06:31.367 Test: verify: DIF generated, REFTAG check ...passed 00:06:31.367 Test: verify: DIF not generated, GUARD check ...passed 00:06:31.367 Test: verify: DIF not generated, APPTAG check ...passed 00:06:31.367 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 21:20:04.577590] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:31.367 [2024-07-15 21:20:04.577715] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:31.367 [2024-07-15 21:20:04.577767] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:31.367 passed 00:06:31.367 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:31.367 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:31.367 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:31.367 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:31.367 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:31.367 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 21:20:04.577925] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:31.367 [2024-07-15 21:20:04.578048] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:31.367 passed 00:06:31.367 Test: verify copy: DIF generated, GUARD check ...passed 00:06:31.367 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:31.368 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:31.368 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:31.368 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 21:20:04.578436] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:31.368 [2024-07-15 21:20:04.578469] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:31.368 passed 00:06:31.368 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:31.368 Test: generate copy: DIF generated, GUARD check ...passed 00:06:31.368 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:31.368 Test: generate copy: DIF generated, REFTAG check ...passed[2024-07-15 21:20:04.578551] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:31.368 00:06:31.368 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:31.368 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:31.368 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:31.368 Test: generate copy: iovecs-len validate ...passed 00:06:31.368 Test: generate copy: buffer alignment validate ...[2024-07-15 21:20:04.578943] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:31.368 passed 00:06:31.368 00:06:31.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.368 suites 1 1 n/a 0 0 00:06:31.368 tests 26 26 26 0 0 00:06:31.368 asserts 115 115 115 0 n/a 00:06:31.368 00:06:31.368 Elapsed time = 0.005 seconds 00:06:31.627 00:06:31.627 real 0m0.549s 00:06:31.627 user 0m0.675s 00:06:31.627 sys 0m0.143s 00:06:31.627 21:20:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.627 ************************************ 00:06:31.627 END TEST accel_dif_functional_tests 00:06:31.627 ************************************ 00:06:31.627 21:20:04 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:31.627 21:20:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.627 00:06:31.627 real 0m33.699s 00:06:31.627 user 0m35.083s 00:06:31.627 sys 0m3.979s 00:06:31.627 ************************************ 00:06:31.627 END TEST accel 00:06:31.627 ************************************ 00:06:31.627 21:20:04 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.627 21:20:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.627 21:20:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.627 21:20:04 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:31.627 21:20:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.627 21:20:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.627 21:20:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.627 ************************************ 00:06:31.627 START TEST accel_rpc 00:06:31.627 ************************************ 00:06:31.627 21:20:04 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:31.885 * Looking for test storage... 00:06:31.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:31.885 21:20:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.885 21:20:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:31.885 21:20:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62094 00:06:31.885 21:20:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62094 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62094 ']' 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.885 21:20:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.885 [2024-07-15 21:20:05.069550] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
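The accel_rpc suite starting here follows the usual pattern of launching spdk_tgt paused and waiting for its RPC socket before driving it. A minimal sketch of that pattern with the paths used in this job; the polling loop is a simplification of the waitforlisten helper traced above, which additionally checks that the target PID stays alive:

    # start the target paused (no subsystem init until framework_start_init is called)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    # poll the default socket /var/tmp/spdk.sock until the RPC server answers;
    # rpc_get_methods is available even before initialization completes
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done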
00:06:31.885 [2024-07-15 21:20:05.069635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:06:31.885 [2024-07-15 21:20:05.211253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.158 [2024-07-15 21:20:05.309493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.742 21:20:05 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.742 21:20:05 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.742 21:20:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:32.742 21:20:05 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:32.743 21:20:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:32.743 21:20:05 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:32.743 21:20:05 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:32.743 21:20:05 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.743 21:20:05 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.743 21:20:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.743 ************************************ 00:06:32.743 START TEST accel_assign_opcode 00:06:32.743 ************************************ 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.743 [2024-07-15 21:20:05.969031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.743 [2024-07-15 21:20:05.981005] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.743 21:20:05 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:32.743 [2024-07-15 21:20:06.031322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.015 
21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.015 software 00:06:33.015 ************************************ 00:06:33.015 END TEST accel_assign_opcode 00:06:33.015 ************************************ 00:06:33.015 00:06:33.015 real 0m0.252s 00:06:33.015 user 0m0.055s 00:06:33.015 sys 0m0.010s 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.015 21:20:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.015 21:20:06 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62094 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62094 ']' 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62094 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62094 00:06:33.015 killing process with pid 62094 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62094' 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@967 -- # kill 62094 00:06:33.015 21:20:06 accel_rpc -- common/autotest_common.sh@972 -- # wait 62094 00:06:33.272 00:06:33.272 real 0m1.755s 00:06:33.272 user 0m1.802s 00:06:33.272 sys 0m0.441s 00:06:33.272 21:20:06 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.272 21:20:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.272 ************************************ 00:06:33.272 END TEST accel_rpc 00:06:33.272 ************************************ 00:06:33.532 21:20:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.532 21:20:06 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.532 21:20:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.532 21:20:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.532 21:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:33.532 ************************************ 00:06:33.532 START TEST app_cmdline 00:06:33.532 ************************************ 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.532 * Looking for test storage... 00:06:33.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:33.532 21:20:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
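The accel_assign_opcode case that just finished is driven entirely over RPC. A condensed sketch of the same sequence, assuming a target started with --wait-for-rpc is still listening on the default /var/tmp/spdk.sock:

    # map the copy opcode to the software module while the target is still in its startup state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    # finish subsystem initialization, then read back the opcode assignments
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software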
00:06:33.532 21:20:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62187 00:06:33.532 21:20:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.532 21:20:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62187 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62187 ']' 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.532 21:20:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.532 [2024-07-15 21:20:06.893895] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:33.532 [2024-07-15 21:20:06.893969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62187 ] 00:06:33.791 [2024-07-15 21:20:07.035345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.791 [2024-07-15 21:20:07.130663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.049 [2024-07-15 21:20:07.171507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.618 21:20:07 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.618 21:20:07 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:34.618 21:20:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:34.878 { 00:06:34.878 "version": "SPDK v24.09-pre git sha1 0663932f5", 00:06:34.878 "fields": { 00:06:34.878 "major": 24, 00:06:34.878 "minor": 9, 00:06:34.878 "patch": 0, 00:06:34.878 "suffix": "-pre", 00:06:34.878 "commit": "0663932f5" 00:06:34.878 } 00:06:34.878 } 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:34.878 21:20:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.878 21:20:08 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:34.878 21:20:08 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.138 request: 00:06:35.138 { 00:06:35.138 "method": "env_dpdk_get_mem_stats", 00:06:35.138 "req_id": 1 00:06:35.138 } 00:06:35.138 Got JSON-RPC error response 00:06:35.138 response: 00:06:35.138 { 00:06:35.138 "code": -32601, 00:06:35.138 "message": "Method not found" 00:06:35.138 } 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.138 21:20:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62187 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62187 ']' 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62187 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62187 00:06:35.138 killing process with pid 62187 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62187' 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@967 -- # kill 62187 00:06:35.138 21:20:08 app_cmdline -- common/autotest_common.sh@972 -- # wait 62187 00:06:35.396 ************************************ 00:06:35.396 END TEST app_cmdline 00:06:35.396 ************************************ 00:06:35.396 00:06:35.396 real 0m1.950s 00:06:35.396 user 0m2.362s 00:06:35.396 sys 0m0.448s 00:06:35.396 21:20:08 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.396 21:20:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.396 21:20:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.396 21:20:08 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.396 21:20:08 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.396 21:20:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.396 21:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:35.396 ************************************ 00:06:35.396 START TEST version 00:06:35.396 ************************************ 00:06:35.396 21:20:08 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.655 * Looking for test storage... 00:06:35.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.655 21:20:08 version -- app/version.sh@17 -- # get_header_version major 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # cut -f2 00:06:35.655 21:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.655 21:20:08 version -- app/version.sh@17 -- # major=24 00:06:35.655 21:20:08 version -- app/version.sh@18 -- # get_header_version minor 00:06:35.655 21:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # cut -f2 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.655 21:20:08 version -- app/version.sh@18 -- # minor=9 00:06:35.655 21:20:08 version -- app/version.sh@19 -- # get_header_version patch 00:06:35.655 21:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # cut -f2 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.655 21:20:08 version -- app/version.sh@19 -- # patch=0 00:06:35.655 21:20:08 version -- app/version.sh@20 -- # get_header_version suffix 00:06:35.655 21:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # cut -f2 00:06:35.655 21:20:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.655 21:20:08 version -- app/version.sh@20 -- # suffix=-pre 00:06:35.655 21:20:08 version -- app/version.sh@22 -- # version=24.9 00:06:35.655 21:20:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:35.655 21:20:08 version -- app/version.sh@28 -- # version=24.9rc0 00:06:35.655 21:20:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:35.655 21:20:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:35.655 21:20:08 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:35.655 21:20:08 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:35.655 00:06:35.655 real 0m0.229s 00:06:35.655 user 0m0.129s 00:06:35.655 sys 0m0.149s 00:06:35.655 ************************************ 00:06:35.655 END TEST version 00:06:35.655 ************************************ 00:06:35.655 21:20:08 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.655 21:20:08 version -- common/autotest_common.sh@10 -- # set +x 00:06:35.655 21:20:09 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.655 21:20:09 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:35.655 21:20:09 -- spdk/autotest.sh@198 -- # uname -s 00:06:35.655 21:20:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:35.655 21:20:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:35.655 21:20:09 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:35.655 21:20:09 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:35.655 21:20:09 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:35.655 21:20:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.655 21:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.655 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:35.913 ************************************ 00:06:35.913 START TEST spdk_dd 00:06:35.913 ************************************ 00:06:35.913 21:20:09 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:35.913 * Looking for test storage... 00:06:35.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:35.913 21:20:09 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.913 21:20:09 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.913 21:20:09 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.913 21:20:09 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.913 21:20:09 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.913 21:20:09 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.913 21:20:09 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.913 21:20:09 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:35.913 21:20:09 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.913 21:20:09 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:36.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.481 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.481 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.482 21:20:09 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:36.482 21:20:09 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:36.482 21:20:09 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:36.482 21:20:09 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:36.482 21:20:09 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:36.482 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 
-- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:36.483 * spdk_dd linked to liburing 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:36.483 21:20:09 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:36.483 
21:20:09 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:36.483 21:20:09 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:36.484 21:20:09 spdk_dd -- 
common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:36.484 21:20:09 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:36.484 21:20:09 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:36.484 21:20:09 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:36.484 21:20:09 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:36.484 21:20:09 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:36.484 21:20:09 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:36.484 21:20:09 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:36.484 21:20:09 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:36.484 21:20:09 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.484 21:20:09 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.484 21:20:09 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:36.484 ************************************ 00:06:36.484 START TEST spdk_dd_basic_rw 00:06:36.484 ************************************ 00:06:36.484 21:20:09 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:36.743 * Looking for test storage... 
00:06:36.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:36.743 21:20:09 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 3 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.004 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.005 ************************************ 00:06:37.005 START TEST dd_bs_lt_native_bs 00:06:37.005 ************************************ 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.005 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:37.005 { 00:06:37.005 "subsystems": [ 00:06:37.005 { 00:06:37.005 "subsystem": "bdev", 00:06:37.005 "config": [ 00:06:37.005 { 00:06:37.005 "params": { 00:06:37.005 "trtype": "pcie", 00:06:37.005 "traddr": "0000:00:10.0", 00:06:37.005 "name": "Nvme0" 00:06:37.005 }, 00:06:37.005 "method": "bdev_nvme_attach_controller" 00:06:37.005 }, 00:06:37.005 { 00:06:37.005 "method": "bdev_wait_for_examine" 00:06:37.005 } 00:06:37.005 ] 00:06:37.005 } 00:06:37.005 ] 00:06:37.005 } 00:06:37.005 [2024-07-15 21:20:10.259765] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:37.005 [2024-07-15 21:20:10.259847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62508 ] 00:06:37.264 [2024-07-15 21:20:10.402134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.264 [2024-07-15 21:20:10.497956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.264 [2024-07-15 21:20:10.539671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.523 [2024-07-15 21:20:10.638004] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:37.523 [2024-07-15 21:20:10.638057] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.523 [2024-07-15 21:20:10.736178] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.523 00:06:37.523 real 0m0.614s 00:06:37.523 user 0m0.416s 00:06:37.523 sys 0m0.155s 00:06:37.523 ************************************ 00:06:37.523 END TEST dd_bs_lt_native_bs 00:06:37.523 ************************************ 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.523 ************************************ 00:06:37.523 START TEST dd_rw 00:06:37.523 ************************************ 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:37.523 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:37.782 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.782 21:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.040 21:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:38.040 21:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:38.040 21:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.299 21:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.299 [2024-07-15 21:20:11.457497] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:38.299 [2024-07-15 21:20:11.457574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62539 ] 00:06:38.299 { 00:06:38.299 "subsystems": [ 00:06:38.299 { 00:06:38.299 "subsystem": "bdev", 00:06:38.299 "config": [ 00:06:38.299 { 00:06:38.299 "params": { 00:06:38.299 "trtype": "pcie", 00:06:38.299 "traddr": "0000:00:10.0", 00:06:38.299 "name": "Nvme0" 00:06:38.299 }, 00:06:38.299 "method": "bdev_nvme_attach_controller" 00:06:38.299 }, 00:06:38.299 { 00:06:38.299 "method": "bdev_wait_for_examine" 00:06:38.299 } 00:06:38.299 ] 00:06:38.299 } 00:06:38.299 ] 00:06:38.299 } 00:06:38.299 [2024-07-15 21:20:11.598451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.557 [2024-07-15 21:20:11.683372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.557 [2024-07-15 21:20:11.724693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.817  Copying: 60/60 [kB] (average 19 MBps) 00:06:38.817 00:06:38.817 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:38.817 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.817 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.817 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.817 [2024-07-15 21:20:12.042734] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:38.817 [2024-07-15 21:20:12.042797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62558 ] 00:06:38.817 { 00:06:38.817 "subsystems": [ 00:06:38.817 { 00:06:38.817 "subsystem": "bdev", 00:06:38.817 "config": [ 00:06:38.817 { 00:06:38.817 "params": { 00:06:38.817 "trtype": "pcie", 00:06:38.817 "traddr": "0000:00:10.0", 00:06:38.817 "name": "Nvme0" 00:06:38.817 }, 00:06:38.817 "method": "bdev_nvme_attach_controller" 00:06:38.817 }, 00:06:38.817 { 00:06:38.817 "method": "bdev_wait_for_examine" 00:06:38.817 } 00:06:38.817 ] 00:06:38.817 } 00:06:38.817 ] 00:06:38.817 } 00:06:38.817 [2024-07-15 21:20:12.174586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.076 [2024-07-15 21:20:12.261919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.076 [2024-07-15 21:20:12.303458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.333  Copying: 60/60 [kB] (average 19 MBps) 00:06:39.333 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:39.333 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.334 21:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.334 [2024-07-15 21:20:12.648780] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:39.334 [2024-07-15 21:20:12.648868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62574 ] 00:06:39.334 { 00:06:39.334 "subsystems": [ 00:06:39.334 { 00:06:39.334 "subsystem": "bdev", 00:06:39.334 "config": [ 00:06:39.334 { 00:06:39.334 "params": { 00:06:39.334 "trtype": "pcie", 00:06:39.334 "traddr": "0000:00:10.0", 00:06:39.334 "name": "Nvme0" 00:06:39.334 }, 00:06:39.334 "method": "bdev_nvme_attach_controller" 00:06:39.334 }, 00:06:39.334 { 00:06:39.334 "method": "bdev_wait_for_examine" 00:06:39.334 } 00:06:39.334 ] 00:06:39.334 } 00:06:39.334 ] 00:06:39.334 } 00:06:39.592 [2024-07-15 21:20:12.785214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.592 [2024-07-15 21:20:12.896697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.592 [2024-07-15 21:20:12.937609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.850  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:39.850 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:39.850 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.416 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:40.416 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.416 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.416 21:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.707 [2024-07-15 21:20:13.825354] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:40.707 [2024-07-15 21:20:13.825428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62593 ] 00:06:40.707 { 00:06:40.707 "subsystems": [ 00:06:40.707 { 00:06:40.707 "subsystem": "bdev", 00:06:40.707 "config": [ 00:06:40.707 { 00:06:40.707 "params": { 00:06:40.707 "trtype": "pcie", 00:06:40.707 "traddr": "0000:00:10.0", 00:06:40.707 "name": "Nvme0" 00:06:40.707 }, 00:06:40.707 "method": "bdev_nvme_attach_controller" 00:06:40.707 }, 00:06:40.708 { 00:06:40.708 "method": "bdev_wait_for_examine" 00:06:40.708 } 00:06:40.708 ] 00:06:40.708 } 00:06:40.708 ] 00:06:40.708 } 00:06:40.708 [2024-07-15 21:20:13.955544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.708 [2024-07-15 21:20:14.050670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.965 [2024-07-15 21:20:14.091675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.223  Copying: 60/60 [kB] (average 58 MBps) 00:06:41.223 00:06:41.223 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:41.223 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.223 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.223 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.223 [2024-07-15 21:20:14.420047] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:41.223 [2024-07-15 21:20:14.420113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62606 ] 00:06:41.223 { 00:06:41.223 "subsystems": [ 00:06:41.223 { 00:06:41.224 "subsystem": "bdev", 00:06:41.224 "config": [ 00:06:41.224 { 00:06:41.224 "params": { 00:06:41.224 "trtype": "pcie", 00:06:41.224 "traddr": "0000:00:10.0", 00:06:41.224 "name": "Nvme0" 00:06:41.224 }, 00:06:41.224 "method": "bdev_nvme_attach_controller" 00:06:41.224 }, 00:06:41.224 { 00:06:41.224 "method": "bdev_wait_for_examine" 00:06:41.224 } 00:06:41.224 ] 00:06:41.224 } 00:06:41.224 ] 00:06:41.224 } 00:06:41.224 [2024-07-15 21:20:14.560528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.480 [2024-07-15 21:20:14.653247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.480 [2024-07-15 21:20:14.694545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.737  Copying: 60/60 [kB] (average 58 MBps) 00:06:41.737 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.737 21:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.737 { 00:06:41.737 "subsystems": [ 00:06:41.737 { 00:06:41.737 "subsystem": "bdev", 00:06:41.737 "config": [ 00:06:41.737 { 00:06:41.737 "params": { 00:06:41.737 "trtype": "pcie", 00:06:41.737 "traddr": "0000:00:10.0", 00:06:41.737 "name": "Nvme0" 00:06:41.737 }, 00:06:41.737 "method": "bdev_nvme_attach_controller" 00:06:41.737 }, 00:06:41.737 { 00:06:41.737 "method": "bdev_wait_for_examine" 00:06:41.737 } 00:06:41.737 ] 00:06:41.737 } 00:06:41.737 ] 00:06:41.737 } 00:06:41.737 [2024-07-15 21:20:15.033287] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:41.737 [2024-07-15 21:20:15.033349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62622 ] 00:06:41.995 [2024-07-15 21:20:15.174111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.995 [2024-07-15 21:20:15.258300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.995 [2024-07-15 21:20:15.298939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.253  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:42.253 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:42.253 21:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.820 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:42.820 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:42.820 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.820 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.820 [2024-07-15 21:20:16.088320] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:42.820 [2024-07-15 21:20:16.088386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62641 ] 00:06:42.820 { 00:06:42.820 "subsystems": [ 00:06:42.820 { 00:06:42.820 "subsystem": "bdev", 00:06:42.820 "config": [ 00:06:42.820 { 00:06:42.820 "params": { 00:06:42.820 "trtype": "pcie", 00:06:42.820 "traddr": "0000:00:10.0", 00:06:42.820 "name": "Nvme0" 00:06:42.820 }, 00:06:42.820 "method": "bdev_nvme_attach_controller" 00:06:42.820 }, 00:06:42.820 { 00:06:42.820 "method": "bdev_wait_for_examine" 00:06:42.820 } 00:06:42.820 ] 00:06:42.820 } 00:06:42.820 ] 00:06:42.820 } 00:06:43.078 [2024-07-15 21:20:16.228696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.078 [2024-07-15 21:20:16.310136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.078 [2024-07-15 21:20:16.350711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.336  Copying: 56/56 [kB] (average 54 MBps) 00:06:43.336 00:06:43.336 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:43.336 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:43.336 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.336 21:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.336 [2024-07-15 21:20:16.679298] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:43.336 [2024-07-15 21:20:16.679366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:06:43.336 { 00:06:43.336 "subsystems": [ 00:06:43.336 { 00:06:43.336 "subsystem": "bdev", 00:06:43.336 "config": [ 00:06:43.336 { 00:06:43.336 "params": { 00:06:43.336 "trtype": "pcie", 00:06:43.336 "traddr": "0000:00:10.0", 00:06:43.336 "name": "Nvme0" 00:06:43.336 }, 00:06:43.336 "method": "bdev_nvme_attach_controller" 00:06:43.337 }, 00:06:43.337 { 00:06:43.337 "method": "bdev_wait_for_examine" 00:06:43.337 } 00:06:43.337 ] 00:06:43.337 } 00:06:43.337 ] 00:06:43.337 } 00:06:43.595 [2024-07-15 21:20:16.819075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.595 [2024-07-15 21:20:16.908632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.595 [2024-07-15 21:20:16.949478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.112  Copying: 56/56 [kB] (average 27 MBps) 00:06:44.112 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.112 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 [2024-07-15 21:20:17.286336] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:44.112 [2024-07-15 21:20:17.286399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62675 ] 00:06:44.112 { 00:06:44.112 "subsystems": [ 00:06:44.112 { 00:06:44.112 "subsystem": "bdev", 00:06:44.112 "config": [ 00:06:44.112 { 00:06:44.112 "params": { 00:06:44.112 "trtype": "pcie", 00:06:44.112 "traddr": "0000:00:10.0", 00:06:44.112 "name": "Nvme0" 00:06:44.112 }, 00:06:44.112 "method": "bdev_nvme_attach_controller" 00:06:44.112 }, 00:06:44.112 { 00:06:44.112 "method": "bdev_wait_for_examine" 00:06:44.112 } 00:06:44.112 ] 00:06:44.112 } 00:06:44.112 ] 00:06:44.112 } 00:06:44.112 [2024-07-15 21:20:17.426216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.370 [2024-07-15 21:20:17.516486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.370 [2024-07-15 21:20:17.557148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.629  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:44.629 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:44.629 21:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.197 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:45.197 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.197 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.197 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.197 [2024-07-15 21:20:18.352415] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:45.197 [2024-07-15 21:20:18.352484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62694 ] 00:06:45.197 { 00:06:45.197 "subsystems": [ 00:06:45.197 { 00:06:45.197 "subsystem": "bdev", 00:06:45.197 "config": [ 00:06:45.197 { 00:06:45.197 "params": { 00:06:45.197 "trtype": "pcie", 00:06:45.197 "traddr": "0000:00:10.0", 00:06:45.197 "name": "Nvme0" 00:06:45.197 }, 00:06:45.197 "method": "bdev_nvme_attach_controller" 00:06:45.197 }, 00:06:45.197 { 00:06:45.197 "method": "bdev_wait_for_examine" 00:06:45.197 } 00:06:45.197 ] 00:06:45.197 } 00:06:45.197 ] 00:06:45.197 } 00:06:45.197 [2024-07-15 21:20:18.492477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.455 [2024-07-15 21:20:18.582907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.455 [2024-07-15 21:20:18.623593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.712  Copying: 56/56 [kB] (average 54 MBps) 00:06:45.713 00:06:45.713 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:45.713 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.713 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.713 21:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.713 [2024-07-15 21:20:18.949644] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:45.713 [2024-07-15 21:20:18.949709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62708 ] 00:06:45.713 { 00:06:45.713 "subsystems": [ 00:06:45.713 { 00:06:45.713 "subsystem": "bdev", 00:06:45.713 "config": [ 00:06:45.713 { 00:06:45.713 "params": { 00:06:45.713 "trtype": "pcie", 00:06:45.713 "traddr": "0000:00:10.0", 00:06:45.713 "name": "Nvme0" 00:06:45.713 }, 00:06:45.713 "method": "bdev_nvme_attach_controller" 00:06:45.713 }, 00:06:45.713 { 00:06:45.713 "method": "bdev_wait_for_examine" 00:06:45.713 } 00:06:45.713 ] 00:06:45.713 } 00:06:45.713 ] 00:06:45.713 } 00:06:45.971 [2024-07-15 21:20:19.090399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.971 [2024-07-15 21:20:19.177520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.971 [2024-07-15 21:20:19.218525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.229  Copying: 56/56 [kB] (average 54 MBps) 00:06:46.229 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.229 21:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.229 { 00:06:46.229 "subsystems": [ 00:06:46.229 { 00:06:46.229 "subsystem": "bdev", 00:06:46.229 "config": [ 00:06:46.229 { 00:06:46.229 "params": { 00:06:46.229 "trtype": "pcie", 00:06:46.229 "traddr": "0000:00:10.0", 00:06:46.229 "name": "Nvme0" 00:06:46.229 }, 00:06:46.229 "method": "bdev_nvme_attach_controller" 00:06:46.230 }, 00:06:46.230 { 00:06:46.230 "method": "bdev_wait_for_examine" 00:06:46.230 } 00:06:46.230 ] 00:06:46.230 } 00:06:46.230 ] 00:06:46.230 } 00:06:46.230 [2024-07-15 21:20:19.554296] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
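The clear_nvme step being started here zeroes the just-written region so the next bs/qd combination begins from a clean device. A rough standalone re-creation of the helper as it appears in this trace (argument handling beyond what the trace shows, and the $conf config-file placeholder, are assumptions):

    clear_nvme() {
        local bdev=$1 nvme_ref=$2 size=$3
        local bs=1048576
        local count=1    # the trace always shows count=1: one 1 MiB zero block covers the 48-56 KiB regions used here
        spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json "$conf"
    }
    clear_nvme Nvme0n1 '' 57344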
00:06:46.230 [2024-07-15 21:20:19.554730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62723 ] 00:06:46.489 [2024-07-15 21:20:19.694288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.489 [2024-07-15 21:20:19.790014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.489 [2024-07-15 21:20:19.830836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.748  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:46.748 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:46.748 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:47.315 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.315 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.315 21:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.315 [2024-07-15 21:20:20.564874] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:47.315 [2024-07-15 21:20:20.564934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62742 ] 00:06:47.315 { 00:06:47.315 "subsystems": [ 00:06:47.315 { 00:06:47.315 "subsystem": "bdev", 00:06:47.315 "config": [ 00:06:47.315 { 00:06:47.315 "params": { 00:06:47.315 "trtype": "pcie", 00:06:47.315 "traddr": "0000:00:10.0", 00:06:47.315 "name": "Nvme0" 00:06:47.315 }, 00:06:47.315 "method": "bdev_nvme_attach_controller" 00:06:47.315 }, 00:06:47.315 { 00:06:47.315 "method": "bdev_wait_for_examine" 00:06:47.315 } 00:06:47.315 ] 00:06:47.315 } 00:06:47.315 ] 00:06:47.315 } 00:06:47.576 [2024-07-15 21:20:20.704891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.576 [2024-07-15 21:20:20.803324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.576 [2024-07-15 21:20:20.846655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.843  Copying: 48/48 [kB] (average 46 MBps) 00:06:47.843 00:06:47.843 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:47.843 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:47.843 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.843 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.843 [2024-07-15 21:20:21.195341] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:47.843 [2024-07-15 21:20:21.195415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62756 ] 00:06:47.843 { 00:06:47.843 "subsystems": [ 00:06:47.843 { 00:06:47.843 "subsystem": "bdev", 00:06:47.843 "config": [ 00:06:47.843 { 00:06:47.843 "params": { 00:06:47.843 "trtype": "pcie", 00:06:47.843 "traddr": "0000:00:10.0", 00:06:47.843 "name": "Nvme0" 00:06:47.843 }, 00:06:47.843 "method": "bdev_nvme_attach_controller" 00:06:47.843 }, 00:06:47.843 { 00:06:47.843 "method": "bdev_wait_for_examine" 00:06:47.843 } 00:06:47.843 ] 00:06:47.843 } 00:06:47.843 ] 00:06:47.843 } 00:06:48.101 [2024-07-15 21:20:21.336984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.101 [2024-07-15 21:20:21.434472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.358 [2024-07-15 21:20:21.477577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.616  Copying: 48/48 [kB] (average 46 MBps) 00:06:48.616 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.616 21:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.616 { 00:06:48.616 "subsystems": [ 00:06:48.616 { 00:06:48.616 "subsystem": "bdev", 00:06:48.616 "config": [ 00:06:48.616 { 00:06:48.616 "params": { 00:06:48.616 "trtype": "pcie", 00:06:48.616 "traddr": "0000:00:10.0", 00:06:48.616 "name": "Nvme0" 00:06:48.616 }, 00:06:48.616 "method": "bdev_nvme_attach_controller" 00:06:48.616 }, 00:06:48.616 { 00:06:48.616 "method": "bdev_wait_for_examine" 00:06:48.616 } 00:06:48.616 ] 00:06:48.616 } 00:06:48.616 ] 00:06:48.616 } 00:06:48.616 [2024-07-15 21:20:21.831509] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:48.616 [2024-07-15 21:20:21.831574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:06:48.616 [2024-07-15 21:20:21.972564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.873 [2024-07-15 21:20:22.076180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.873 [2024-07-15 21:20:22.120649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.129  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.129 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:49.129 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:49.130 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.695 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:49.695 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.695 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.695 21:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.695 [2024-07-15 21:20:22.900110] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:49.695 [2024-07-15 21:20:22.900175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62792 ] 00:06:49.695 { 00:06:49.695 "subsystems": [ 00:06:49.695 { 00:06:49.695 "subsystem": "bdev", 00:06:49.695 "config": [ 00:06:49.695 { 00:06:49.695 "params": { 00:06:49.695 "trtype": "pcie", 00:06:49.695 "traddr": "0000:00:10.0", 00:06:49.695 "name": "Nvme0" 00:06:49.695 }, 00:06:49.695 "method": "bdev_nvme_attach_controller" 00:06:49.695 }, 00:06:49.695 { 00:06:49.695 "method": "bdev_wait_for_examine" 00:06:49.695 } 00:06:49.695 ] 00:06:49.695 } 00:06:49.695 ] 00:06:49.695 } 00:06:49.696 [2024-07-15 21:20:23.040525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.952 [2024-07-15 21:20:23.127452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.952 [2024-07-15 21:20:23.168847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.210  Copying: 48/48 [kB] (average 46 MBps) 00:06:50.210 00:06:50.210 21:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:50.210 21:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.210 21:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.210 21:20:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.210 [2024-07-15 21:20:23.498232] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:50.210 [2024-07-15 21:20:23.498299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62804 ] 00:06:50.210 { 00:06:50.210 "subsystems": [ 00:06:50.210 { 00:06:50.210 "subsystem": "bdev", 00:06:50.210 "config": [ 00:06:50.210 { 00:06:50.210 "params": { 00:06:50.210 "trtype": "pcie", 00:06:50.210 "traddr": "0000:00:10.0", 00:06:50.210 "name": "Nvme0" 00:06:50.210 }, 00:06:50.210 "method": "bdev_nvme_attach_controller" 00:06:50.210 }, 00:06:50.210 { 00:06:50.210 "method": "bdev_wait_for_examine" 00:06:50.210 } 00:06:50.210 ] 00:06:50.210 } 00:06:50.210 ] 00:06:50.210 } 00:06:50.467 [2024-07-15 21:20:23.639247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.467 [2024-07-15 21:20:23.729761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.467 [2024-07-15 21:20:23.771118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.723  Copying: 48/48 [kB] (average 46 MBps) 00:06:50.723 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.723 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.981 { 00:06:50.981 "subsystems": [ 00:06:50.981 { 00:06:50.981 "subsystem": "bdev", 00:06:50.981 "config": [ 00:06:50.981 { 00:06:50.981 "params": { 00:06:50.981 "trtype": "pcie", 00:06:50.981 "traddr": "0000:00:10.0", 00:06:50.981 "name": "Nvme0" 00:06:50.981 }, 00:06:50.981 "method": "bdev_nvme_attach_controller" 00:06:50.981 }, 00:06:50.981 { 00:06:50.981 "method": "bdev_wait_for_examine" 00:06:50.981 } 00:06:50.981 ] 00:06:50.981 } 00:06:50.981 ] 00:06:50.981 } 00:06:50.981 [2024-07-15 21:20:24.116976] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
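Every pass of the dd_rw loop has the same shape as the run above: write the generated dump0 through the block device, read the same region back into dump1, and require a byte-identical round trip before clearing the device. Condensed into a standalone sketch for the 16 KiB, qd=64 case (dump paths and $conf are placeholders for the harness-generated files):

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json "$conf"            # write the 48 KiB of test data
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=16384 --qd=64 --count=3 --json "$conf"  # read the same three blocks back
    diff -q dd.dump0 dd.dump1                                                       # round trip must be bit-identical
    clear_nvme Nvme0n1 '' 49152                                                     # zero the region before the next combination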
00:06:50.981 [2024-07-15 21:20:24.117072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62825 ] 00:06:50.981 [2024-07-15 21:20:24.264772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.240 [2024-07-15 21:20:24.362361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.240 [2024-07-15 21:20:24.403809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.500  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:51.500 00:06:51.500 00:06:51.500 real 0m13.800s 00:06:51.500 user 0m9.951s 00:06:51.500 sys 0m4.861s 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 ************************************ 00:06:51.500 END TEST dd_rw 00:06:51.500 ************************************ 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 ************************************ 00:06:51.500 START TEST dd_rw_offset 00:06:51.500 ************************************ 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=a9b0s0jjhy31qfnz74ftpy6jo8r4bruz4pi5ewoqndc8ce73e6h6ynq7uvyue56w4lyqmm5smh5as9kpsbirtjb7j9f4gqc99q6uyjus1gre1adw76fny533y7jnevwuceq301o7w7pk8jl177gp57fyatffb0k7ivdzn58c1pcvvkwnv9dvzyxuiy5ijz8o6gvyvfw2v09ye7ffk0r04ommkssai1xgzm6p4ynzjc5cjvep5z6yql9yvygxh4dcw6q77ra26or8b7m3o3ynzcrpxi7wk7lqaai7d8bqkb91biolu8vyoeyy84hy9j75i1dmf16bydz8fi3fon42rcst3i87222hppq09y7ex30kgzi6ix4kn916rua4stveekumururahuoow9c0j5tbhcz1o12jj8kwonjekdngj8ts9dlp5lwyjxc3mldhmda21p09kmwr07apodt6owxtc08wtpf6nbfwrwznubttqogdkd1xq0u5l3rx40psvpqn0d868hian6csma6f6sxmajkuv751miaatrt47puoecdtlz1o2i7nwo7f27woefedhpp33j668a1tfau2nnkdcgm6w3tc10wp3ihxnz6ytqlolgtqbme3znlh06g4cjp7y2hygvw84gup3dsagfz9nv3g1bnjb3vfy97bteilxung3rpb2voda4tqogn2lkdubk5yx8ikayis5aybmljnjtnrdzi72o63lb0q2nxfkoubnparnmkxd03rrrxyqvrifljawg0qesfe6llgn3n9lp5ivd1xanckzfkxolwncqueojpgk2tbncc43yclmyhj87o3lm00sqkl1bfzx9unjvdkb3dav9w4qna4zq8cvdwmflk9hi9ol3sf9jgi3w71sixmp0nx9mmkst8ipmbes511maru6bjffjf1nzfs3vfelecrhq6ai1pwzzij0q6tsa8cryax9u1mzlq61zale1f99mpebyfb5hlnf06s7hvg06gpv4gh41f2ko25wie43fnubtpjor5m1ulpc43ms1ac03985vaqt5jqv4nburjxzd615kuo7wyuf8umxvybh5at6neb6vf0vdyj9c4rkxgy8wuhpr61bkku9q1u14xorgsehh8fp1vjdixvv7p7563ea8vazjsf3akt2jsm8kvb8w6459w2uelwcfd2gcia7ittwrplopsnl8nt0cwu2m8d4kjnmuh52wyc9uuz6gigzhjpudyp39lq80yghj1wpnz78l8lb733fa7buy88b8y3pwmsh00sdjycjru8i3zmdsf4okc77okvm7e2bhxcqp6c39vpomml9jngymrl11q9mw38a0gblm54bkv7s4i85byajh9qmarrcxyf3yr4qxaoj42rm9sa2w9tox7ddpxviad0m6z830xal8eg8xjlpv6cqckwwz27tazc4n6hckz0ajgjo3wvv9v8o24k5tpy25r4zq4o7rz0ugfnoj0mrjh46r45ek7q2aqa8gimge2xesniriw1933lt0g08su90iuab6txlv4l27egnds62toftfkrwgv107xiguyrj3h3efi9b2tmq5n3w503r9hb5vnv6jyjc2opk4o2jtwjfxl9anf8vr2jui4jn7au2ehljwj8kb53fzc9tvlpehzb5paw5d9cqw7am4p1wsost8appwmmtpbhi27gtsvgfbqt8l1fj488a55q7n5l3nr59snrsp5fc1m3yag5z1ia74qkvfl1675ojvmpsiq6gwmw3t3vacmgebrk25p0ro5f53av6gxosn9ifsex10ocw22urdbofee9zfhdcov0farx3qai6f2ydc2duusoq8ou96yb3fbol1jx1hm0ny3zvv6oti2s4bbnd1l5uv7teyufppcgiplq5b12e3qxvui5acz5jrz919qwuhn9z3a3y2rs8i8a0poz514hfeerkvsz6146aazx9xm8rs5vqwqbik2bvjjqbvdzk692ys1ytff79jt6b1rcux1fohkegac3mraqh7sxst8mpql6r1l5646v7rjrrytnj8babwypqm0twrg5u0r2ffqe0p34lqpeeal95gptd74vzgphdoxeozddopl5icigzxd1yzkzeld0glwfg1ptd27qn6id9c3imrmm9m1ec8dto21tkpglkane8inwoc3ijh4kv0ulnvu0mg0r1eib15qha0w0cbhz2k3neqq4iaeveaco5r16jbyotk9ryrbwq211zjgykv7c39oxf2fqa69w48tssaaf92aqv2two94jvrm5xtqylucngeom61z2nt65mrslcemmiym9o5xxdfiamrgu82jk81vr296fb44v6fe9b09pq18212ujp6w6ym0bpjrx3xbaj38oyigdal1k7fgrkvlkfspnx5h8dkmg1kx2p09gahwd5v5ptp0pnm6pej9lm60q71hx3mu76cxxcosw2se8ry6r3s5d429xc78kkqeay0iprzsi82ekcvo3gg7k2fqvazyovlekpk91y2w5u6vyv1wqfdzo365f3nyduved63gd3ctvpwlqp1vw782pmhtgm6vyfony8dr7zxc0ld6fe3eif3a75xzdls3nj7pzrpmz2lkir5ccndz9lwxqmcjwq8tmjzio3z1dtsp00q82tsllldd3t8nh85myru0mm1d6rzd3sv8xwze08roeevpr7m42xyzn426r3ddc7ub3z0g1f3d0ae4znjcmy6vid3c2pm5itvmdlv16u62f4t12tbdgc8o5pwo5enx8ohswd2wmtpda2qo5cf0jrlyd7uj2r7jwdew64ithpsscy9mxehojxxiaduliaujr7oh4wis2u1l6l86nznt2gs3cs1y4kks606rjxldw1d761aq4zfmu4dk9xv3ijuhb5djxsj7zfwvv1uvr5zb3rwcnnxmvoxaeu6cvhxsnagwj5ecfds0xiwbjd56iyirjmlb217pvsynd75plpf10s8acpmfve5q06p5apob6v8jyil2y2v1tru16su4nnaq1027dbzk5g6h7ntskbvxpe5laqk6bw068usbal1xeam1ckkfyjevb306g37kt87wx3idzp7vsrm1a7irurpp0mr4cqlfop5dz2o5wtf8qyxtve45my4ba823zmcklzjyrcbi3x7hjk4kzmunv6w7cfm6qtsjr7e4av6glew5rurh0e30fq4pigwbsvwv2hhyeima5foczhs0qz7zmhj486m5g951sqixcii2fb4xywsci0cazdr5q2s44l70gnlefd6zlwcx57ufmbnx8nwkwudsydq5rd5tf6eciuudkwjo2jcsn4vrlnco7mpntlla1hi3tch5xysar8wsgfwh6e02vrzrixzm9kj077b7dnt098539lrjx1lilkh410fjltwhikb3jlx5ath4qvxwp6yy1kjf1ukxbp6g3qb2zak6pxo3of64tjvbf70sg2e47m6i51hx6te1e908eqtlaa3115h65a7ma5t5dpi298077nmrdt
6plhwqx91cxsd47gl6uvnl4ay3vm7ddfv1iewcagyoa3xl4zj088i16cx17xkmk6pa8r0wu1bgz92bq34mb1nss2mdxq0fnhrewpxe2drosgy49cw3owzi4vn4dv0p1fqvevftku8ejdh8ugbtsy9sdh450l0fr8mnoujy990h93xta5vwxxdvrdrpmlkf5wnm82r25ayst7cmegkrih2z1tkki8pewyqwkfgronjtnixl7x0dihs691b2ni1ieqssbaks1jnr2uegq3luukwg9j0q4qlsose6lu7def77hgjtfn012i1mf5frjsciusos3l18wa0r2947kf11imto85q29yw4i5zkepgaaklkc2bi2oxnjyp092natpc5rxg99xqs9w011djgd2q3nyngyow35sg9x8qvcf8wn4cpldjslfuayaulu0dfllvxlg4hzxxhhlcssqt7z5kxwzfpf704n4m21qe3jm1x056ck14nxce202xrqdkwli6jis2930rwoykbnsr5xwowzjs4nk4whahpq9lt 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:51.500 21:20:24 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:51.759 [2024-07-15 21:20:24.870209] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:51.760 [2024-07-15 21:20:24.870281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:06:51.760 { 00:06:51.760 "subsystems": [ 00:06:51.760 { 00:06:51.760 "subsystem": "bdev", 00:06:51.760 "config": [ 00:06:51.760 { 00:06:51.760 "params": { 00:06:51.760 "trtype": "pcie", 00:06:51.760 "traddr": "0000:00:10.0", 00:06:51.760 "name": "Nvme0" 00:06:51.760 }, 00:06:51.760 "method": "bdev_nvme_attach_controller" 00:06:51.760 }, 00:06:51.760 { 00:06:51.760 "method": "bdev_wait_for_examine" 00:06:51.760 } 00:06:51.760 ] 00:06:51.760 } 00:06:51.760 ] 00:06:51.760 } 00:06:51.760 [2024-07-15 21:20:25.010197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.760 [2024-07-15 21:20:25.105790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.019 [2024-07-15 21:20:25.147774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.279  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:52.279 00:06:52.279 21:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:52.279 21:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:52.279 21:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:52.279 21:20:25 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:52.279 [2024-07-15 21:20:25.484272] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:06:52.279 [2024-07-15 21:20:25.484500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62869 ] 00:06:52.279 { 00:06:52.279 "subsystems": [ 00:06:52.279 { 00:06:52.279 "subsystem": "bdev", 00:06:52.279 "config": [ 00:06:52.279 { 00:06:52.279 "params": { 00:06:52.279 "trtype": "pcie", 00:06:52.279 "traddr": "0000:00:10.0", 00:06:52.279 "name": "Nvme0" 00:06:52.279 }, 00:06:52.279 "method": "bdev_nvme_attach_controller" 00:06:52.279 }, 00:06:52.279 { 00:06:52.279 "method": "bdev_wait_for_examine" 00:06:52.279 } 00:06:52.279 ] 00:06:52.279 } 00:06:52.279 ] 00:06:52.279 } 00:06:52.279 [2024-07-15 21:20:25.624553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.538 [2024-07-15 21:20:25.721630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.538 [2024-07-15 21:20:25.762991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.798  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:52.798 00:06:52.798 21:20:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:52.798 ************************************ 00:06:52.798 END TEST dd_rw_offset 00:06:52.798 ************************************ 00:06:52.798 21:20:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ a9b0s0jjhy31qfnz74ftpy6jo8r4bruz4pi5ewoqndc8ce73e6h6ynq7uvyue56w4lyqmm5smh5as9kpsbirtjb7j9f4gqc99q6uyjus1gre1adw76fny533y7jnevwuceq301o7w7pk8jl177gp57fyatffb0k7ivdzn58c1pcvvkwnv9dvzyxuiy5ijz8o6gvyvfw2v09ye7ffk0r04ommkssai1xgzm6p4ynzjc5cjvep5z6yql9yvygxh4dcw6q77ra26or8b7m3o3ynzcrpxi7wk7lqaai7d8bqkb91biolu8vyoeyy84hy9j75i1dmf16bydz8fi3fon42rcst3i87222hppq09y7ex30kgzi6ix4kn916rua4stveekumururahuoow9c0j5tbhcz1o12jj8kwonjekdngj8ts9dlp5lwyjxc3mldhmda21p09kmwr07apodt6owxtc08wtpf6nbfwrwznubttqogdkd1xq0u5l3rx40psvpqn0d868hian6csma6f6sxmajkuv751miaatrt47puoecdtlz1o2i7nwo7f27woefedhpp33j668a1tfau2nnkdcgm6w3tc10wp3ihxnz6ytqlolgtqbme3znlh06g4cjp7y2hygvw84gup3dsagfz9nv3g1bnjb3vfy97bteilxung3rpb2voda4tqogn2lkdubk5yx8ikayis5aybmljnjtnrdzi72o63lb0q2nxfkoubnparnmkxd03rrrxyqvrifljawg0qesfe6llgn3n9lp5ivd1xanckzfkxolwncqueojpgk2tbncc43yclmyhj87o3lm00sqkl1bfzx9unjvdkb3dav9w4qna4zq8cvdwmflk9hi9ol3sf9jgi3w71sixmp0nx9mmkst8ipmbes511maru6bjffjf1nzfs3vfelecrhq6ai1pwzzij0q6tsa8cryax9u1mzlq61zale1f99mpebyfb5hlnf06s7hvg06gpv4gh41f2ko25wie43fnubtpjor5m1ulpc43ms1ac03985vaqt5jqv4nburjxzd615kuo7wyuf8umxvybh5at6neb6vf0vdyj9c4rkxgy8wuhpr61bkku9q1u14xorgsehh8fp1vjdixvv7p7563ea8vazjsf3akt2jsm8kvb8w6459w2uelwcfd2gcia7ittwrplopsnl8nt0cwu2m8d4kjnmuh52wyc9uuz6gigzhjpudyp39lq80yghj1wpnz78l8lb733fa7buy88b8y3pwmsh00sdjycjru8i3zmdsf4okc77okvm7e2bhxcqp6c39vpomml9jngymrl11q9mw38a0gblm54bkv7s4i85byajh9qmarrcxyf3yr4qxaoj42rm9sa2w9tox7ddpxviad0m6z830xal8eg8xjlpv6cqckwwz27tazc4n6hckz0ajgjo3wvv9v8o24k5tpy25r4zq4o7rz0ugfnoj0mrjh46r45ek7q2aqa8gimge2xesniriw1933lt0g08su90iuab6txlv4l27egnds62toftfkrwgv107xiguyrj3h3efi9b2tmq5n3w503r9hb5vnv6jyjc2opk4o2jtwjfxl9anf8vr2jui4jn7au2ehljwj8kb53fzc9tvlpehzb5paw5d9cqw7am4p1wsost8appwmmtpbhi27gtsvgfbqt8l1fj488a55q7n5l3nr59snrsp5fc1m3yag5z1ia74qkvfl1675ojvmpsiq6gwmw3t3vacmgebrk25p0ro5f53av6gxosn9ifsex10ocw22urdbofee9zfhdcov0farx3qai6f2ydc2duusoq8ou96yb3fbol1jx1hm0ny3zvv6oti2s4bbnd1l5uv7teyufppcgiplq5b12e3qxvui5acz5jrz919qwuhn9z3a3y2rs8i8a0poz514hfeerkvsz6146aazx9xm8rs5vqwqbik2bvjjqbvdzk692ys1ytff79jt6b1rcu
x1fohkegac3mraqh7sxst8mpql6r1l5646v7rjrrytnj8babwypqm0twrg5u0r2ffqe0p34lqpeeal95gptd74vzgphdoxeozddopl5icigzxd1yzkzeld0glwfg1ptd27qn6id9c3imrmm9m1ec8dto21tkpglkane8inwoc3ijh4kv0ulnvu0mg0r1eib15qha0w0cbhz2k3neqq4iaeveaco5r16jbyotk9ryrbwq211zjgykv7c39oxf2fqa69w48tssaaf92aqv2two94jvrm5xtqylucngeom61z2nt65mrslcemmiym9o5xxdfiamrgu82jk81vr296fb44v6fe9b09pq18212ujp6w6ym0bpjrx3xbaj38oyigdal1k7fgrkvlkfspnx5h8dkmg1kx2p09gahwd5v5ptp0pnm6pej9lm60q71hx3mu76cxxcosw2se8ry6r3s5d429xc78kkqeay0iprzsi82ekcvo3gg7k2fqvazyovlekpk91y2w5u6vyv1wqfdzo365f3nyduved63gd3ctvpwlqp1vw782pmhtgm6vyfony8dr7zxc0ld6fe3eif3a75xzdls3nj7pzrpmz2lkir5ccndz9lwxqmcjwq8tmjzio3z1dtsp00q82tsllldd3t8nh85myru0mm1d6rzd3sv8xwze08roeevpr7m42xyzn426r3ddc7ub3z0g1f3d0ae4znjcmy6vid3c2pm5itvmdlv16u62f4t12tbdgc8o5pwo5enx8ohswd2wmtpda2qo5cf0jrlyd7uj2r7jwdew64ithpsscy9mxehojxxiaduliaujr7oh4wis2u1l6l86nznt2gs3cs1y4kks606rjxldw1d761aq4zfmu4dk9xv3ijuhb5djxsj7zfwvv1uvr5zb3rwcnnxmvoxaeu6cvhxsnagwj5ecfds0xiwbjd56iyirjmlb217pvsynd75plpf10s8acpmfve5q06p5apob6v8jyil2y2v1tru16su4nnaq1027dbzk5g6h7ntskbvxpe5laqk6bw068usbal1xeam1ckkfyjevb306g37kt87wx3idzp7vsrm1a7irurpp0mr4cqlfop5dz2o5wtf8qyxtve45my4ba823zmcklzjyrcbi3x7hjk4kzmunv6w7cfm6qtsjr7e4av6glew5rurh0e30fq4pigwbsvwv2hhyeima5foczhs0qz7zmhj486m5g951sqixcii2fb4xywsci0cazdr5q2s44l70gnlefd6zlwcx57ufmbnx8nwkwudsydq5rd5tf6eciuudkwjo2jcsn4vrlnco7mpntlla1hi3tch5xysar8wsgfwh6e02vrzrixzm9kj077b7dnt098539lrjx1lilkh410fjltwhikb3jlx5ath4qvxwp6yy1kjf1ukxbp6g3qb2zak6pxo3of64tjvbf70sg2e47m6i51hx6te1e908eqtlaa3115h65a7ma5t5dpi298077nmrdt6plhwqx91cxsd47gl6uvnl4ay3vm7ddfv1iewcagyoa3xl4zj088i16cx17xkmk6pa8r0wu1bgz92bq34mb1nss2mdxq0fnhrewpxe2drosgy49cw3owzi4vn4dv0p1fqvevftku8ejdh8ugbtsy9sdh450l0fr8mnoujy990h93xta5vwxxdvrdrpmlkf5wnm82r25ayst7cmegkrih2z1tkki8pewyqwkfgronjtnixl7x0dihs691b2ni1ieqssbaks1jnr2uegq3luukwg9j0q4qlsose6lu7def77hgjtfn012i1mf5frjsciusos3l18wa0r2947kf11imto85q29yw4i5zkepgaaklkc2bi2oxnjyp092natpc5rxg99xqs9w011djgd2q3nyngyow35sg9x8qvcf8wn4cpldjslfuayaulu0dfllvxlg4hzxxhhlcssqt7z5kxwzfpf704n4m21qe3jm1x056ck14nxce202xrqdkwli6jis2930rwoykbnsr5xwowzjs4nk4whahpq9lt == 
\a\9\b\0\s\0\j\j\h\y\3\1\q\f\n\z\7\4\f\t\p\y\6\j\o\8\r\4\b\r\u\z\4\p\i\5\e\w\o\q\n\d\c\8\c\e\7\3\e\6\h\6\y\n\q\7\u\v\y\u\e\5\6\w\4\l\y\q\m\m\5\s\m\h\5\a\s\9\k\p\s\b\i\r\t\j\b\7\j\9\f\4\g\q\c\9\9\q\6\u\y\j\u\s\1\g\r\e\1\a\d\w\7\6\f\n\y\5\3\3\y\7\j\n\e\v\w\u\c\e\q\3\0\1\o\7\w\7\p\k\8\j\l\1\7\7\g\p\5\7\f\y\a\t\f\f\b\0\k\7\i\v\d\z\n\5\8\c\1\p\c\v\v\k\w\n\v\9\d\v\z\y\x\u\i\y\5\i\j\z\8\o\6\g\v\y\v\f\w\2\v\0\9\y\e\7\f\f\k\0\r\0\4\o\m\m\k\s\s\a\i\1\x\g\z\m\6\p\4\y\n\z\j\c\5\c\j\v\e\p\5\z\6\y\q\l\9\y\v\y\g\x\h\4\d\c\w\6\q\7\7\r\a\2\6\o\r\8\b\7\m\3\o\3\y\n\z\c\r\p\x\i\7\w\k\7\l\q\a\a\i\7\d\8\b\q\k\b\9\1\b\i\o\l\u\8\v\y\o\e\y\y\8\4\h\y\9\j\7\5\i\1\d\m\f\1\6\b\y\d\z\8\f\i\3\f\o\n\4\2\r\c\s\t\3\i\8\7\2\2\2\h\p\p\q\0\9\y\7\e\x\3\0\k\g\z\i\6\i\x\4\k\n\9\1\6\r\u\a\4\s\t\v\e\e\k\u\m\u\r\u\r\a\h\u\o\o\w\9\c\0\j\5\t\b\h\c\z\1\o\1\2\j\j\8\k\w\o\n\j\e\k\d\n\g\j\8\t\s\9\d\l\p\5\l\w\y\j\x\c\3\m\l\d\h\m\d\a\2\1\p\0\9\k\m\w\r\0\7\a\p\o\d\t\6\o\w\x\t\c\0\8\w\t\p\f\6\n\b\f\w\r\w\z\n\u\b\t\t\q\o\g\d\k\d\1\x\q\0\u\5\l\3\r\x\4\0\p\s\v\p\q\n\0\d\8\6\8\h\i\a\n\6\c\s\m\a\6\f\6\s\x\m\a\j\k\u\v\7\5\1\m\i\a\a\t\r\t\4\7\p\u\o\e\c\d\t\l\z\1\o\2\i\7\n\w\o\7\f\2\7\w\o\e\f\e\d\h\p\p\3\3\j\6\6\8\a\1\t\f\a\u\2\n\n\k\d\c\g\m\6\w\3\t\c\1\0\w\p\3\i\h\x\n\z\6\y\t\q\l\o\l\g\t\q\b\m\e\3\z\n\l\h\0\6\g\4\c\j\p\7\y\2\h\y\g\v\w\8\4\g\u\p\3\d\s\a\g\f\z\9\n\v\3\g\1\b\n\j\b\3\v\f\y\9\7\b\t\e\i\l\x\u\n\g\3\r\p\b\2\v\o\d\a\4\t\q\o\g\n\2\l\k\d\u\b\k\5\y\x\8\i\k\a\y\i\s\5\a\y\b\m\l\j\n\j\t\n\r\d\z\i\7\2\o\6\3\l\b\0\q\2\n\x\f\k\o\u\b\n\p\a\r\n\m\k\x\d\0\3\r\r\r\x\y\q\v\r\i\f\l\j\a\w\g\0\q\e\s\f\e\6\l\l\g\n\3\n\9\l\p\5\i\v\d\1\x\a\n\c\k\z\f\k\x\o\l\w\n\c\q\u\e\o\j\p\g\k\2\t\b\n\c\c\4\3\y\c\l\m\y\h\j\8\7\o\3\l\m\0\0\s\q\k\l\1\b\f\z\x\9\u\n\j\v\d\k\b\3\d\a\v\9\w\4\q\n\a\4\z\q\8\c\v\d\w\m\f\l\k\9\h\i\9\o\l\3\s\f\9\j\g\i\3\w\7\1\s\i\x\m\p\0\n\x\9\m\m\k\s\t\8\i\p\m\b\e\s\5\1\1\m\a\r\u\6\b\j\f\f\j\f\1\n\z\f\s\3\v\f\e\l\e\c\r\h\q\6\a\i\1\p\w\z\z\i\j\0\q\6\t\s\a\8\c\r\y\a\x\9\u\1\m\z\l\q\6\1\z\a\l\e\1\f\9\9\m\p\e\b\y\f\b\5\h\l\n\f\0\6\s\7\h\v\g\0\6\g\p\v\4\g\h\4\1\f\2\k\o\2\5\w\i\e\4\3\f\n\u\b\t\p\j\o\r\5\m\1\u\l\p\c\4\3\m\s\1\a\c\0\3\9\8\5\v\a\q\t\5\j\q\v\4\n\b\u\r\j\x\z\d\6\1\5\k\u\o\7\w\y\u\f\8\u\m\x\v\y\b\h\5\a\t\6\n\e\b\6\v\f\0\v\d\y\j\9\c\4\r\k\x\g\y\8\w\u\h\p\r\6\1\b\k\k\u\9\q\1\u\1\4\x\o\r\g\s\e\h\h\8\f\p\1\v\j\d\i\x\v\v\7\p\7\5\6\3\e\a\8\v\a\z\j\s\f\3\a\k\t\2\j\s\m\8\k\v\b\8\w\6\4\5\9\w\2\u\e\l\w\c\f\d\2\g\c\i\a\7\i\t\t\w\r\p\l\o\p\s\n\l\8\n\t\0\c\w\u\2\m\8\d\4\k\j\n\m\u\h\5\2\w\y\c\9\u\u\z\6\g\i\g\z\h\j\p\u\d\y\p\3\9\l\q\8\0\y\g\h\j\1\w\p\n\z\7\8\l\8\l\b\7\3\3\f\a\7\b\u\y\8\8\b\8\y\3\p\w\m\s\h\0\0\s\d\j\y\c\j\r\u\8\i\3\z\m\d\s\f\4\o\k\c\7\7\o\k\v\m\7\e\2\b\h\x\c\q\p\6\c\3\9\v\p\o\m\m\l\9\j\n\g\y\m\r\l\1\1\q\9\m\w\3\8\a\0\g\b\l\m\5\4\b\k\v\7\s\4\i\8\5\b\y\a\j\h\9\q\m\a\r\r\c\x\y\f\3\y\r\4\q\x\a\o\j\4\2\r\m\9\s\a\2\w\9\t\o\x\7\d\d\p\x\v\i\a\d\0\m\6\z\8\3\0\x\a\l\8\e\g\8\x\j\l\p\v\6\c\q\c\k\w\w\z\2\7\t\a\z\c\4\n\6\h\c\k\z\0\a\j\g\j\o\3\w\v\v\9\v\8\o\2\4\k\5\t\p\y\2\5\r\4\z\q\4\o\7\r\z\0\u\g\f\n\o\j\0\m\r\j\h\4\6\r\4\5\e\k\7\q\2\a\q\a\8\g\i\m\g\e\2\x\e\s\n\i\r\i\w\1\9\3\3\l\t\0\g\0\8\s\u\9\0\i\u\a\b\6\t\x\l\v\4\l\2\7\e\g\n\d\s\6\2\t\o\f\t\f\k\r\w\g\v\1\0\7\x\i\g\u\y\r\j\3\h\3\e\f\i\9\b\2\t\m\q\5\n\3\w\5\0\3\r\9\h\b\5\v\n\v\6\j\y\j\c\2\o\p\k\4\o\2\j\t\w\j\f\x\l\9\a\n\f\8\v\r\2\j\u\i\4\j\n\7\a\u\2\e\h\l\j\w\j\8\k\b\5\3\f\z\c\9\t\v\l\p\e\h\z\b\5\p\a\w\5\d\9\c\q\w\7\a\m\4\p\1\w\s\o\s\t\8\a\p\p\w\m\m\t\p\b\h\i\2\7\g\t\s\v\g\f\b\q\t\8\l\1\f\j\4\8\8\a\5\5\q\7\n\5\l\3\n\r\5\9\s\n\r\s\p\5\f\c\1\m\3\y\a\g\5\z\1\i\a\7\4\q\k\v\f\l\1\6\7\5\o\j\v\m\p\s\i\q\6\g\w\m\w\
3\t\3\v\a\c\m\g\e\b\r\k\2\5\p\0\r\o\5\f\5\3\a\v\6\g\x\o\s\n\9\i\f\s\e\x\1\0\o\c\w\2\2\u\r\d\b\o\f\e\e\9\z\f\h\d\c\o\v\0\f\a\r\x\3\q\a\i\6\f\2\y\d\c\2\d\u\u\s\o\q\8\o\u\9\6\y\b\3\f\b\o\l\1\j\x\1\h\m\0\n\y\3\z\v\v\6\o\t\i\2\s\4\b\b\n\d\1\l\5\u\v\7\t\e\y\u\f\p\p\c\g\i\p\l\q\5\b\1\2\e\3\q\x\v\u\i\5\a\c\z\5\j\r\z\9\1\9\q\w\u\h\n\9\z\3\a\3\y\2\r\s\8\i\8\a\0\p\o\z\5\1\4\h\f\e\e\r\k\v\s\z\6\1\4\6\a\a\z\x\9\x\m\8\r\s\5\v\q\w\q\b\i\k\2\b\v\j\j\q\b\v\d\z\k\6\9\2\y\s\1\y\t\f\f\7\9\j\t\6\b\1\r\c\u\x\1\f\o\h\k\e\g\a\c\3\m\r\a\q\h\7\s\x\s\t\8\m\p\q\l\6\r\1\l\5\6\4\6\v\7\r\j\r\r\y\t\n\j\8\b\a\b\w\y\p\q\m\0\t\w\r\g\5\u\0\r\2\f\f\q\e\0\p\3\4\l\q\p\e\e\a\l\9\5\g\p\t\d\7\4\v\z\g\p\h\d\o\x\e\o\z\d\d\o\p\l\5\i\c\i\g\z\x\d\1\y\z\k\z\e\l\d\0\g\l\w\f\g\1\p\t\d\2\7\q\n\6\i\d\9\c\3\i\m\r\m\m\9\m\1\e\c\8\d\t\o\2\1\t\k\p\g\l\k\a\n\e\8\i\n\w\o\c\3\i\j\h\4\k\v\0\u\l\n\v\u\0\m\g\0\r\1\e\i\b\1\5\q\h\a\0\w\0\c\b\h\z\2\k\3\n\e\q\q\4\i\a\e\v\e\a\c\o\5\r\1\6\j\b\y\o\t\k\9\r\y\r\b\w\q\2\1\1\z\j\g\y\k\v\7\c\3\9\o\x\f\2\f\q\a\6\9\w\4\8\t\s\s\a\a\f\9\2\a\q\v\2\t\w\o\9\4\j\v\r\m\5\x\t\q\y\l\u\c\n\g\e\o\m\6\1\z\2\n\t\6\5\m\r\s\l\c\e\m\m\i\y\m\9\o\5\x\x\d\f\i\a\m\r\g\u\8\2\j\k\8\1\v\r\2\9\6\f\b\4\4\v\6\f\e\9\b\0\9\p\q\1\8\2\1\2\u\j\p\6\w\6\y\m\0\b\p\j\r\x\3\x\b\a\j\3\8\o\y\i\g\d\a\l\1\k\7\f\g\r\k\v\l\k\f\s\p\n\x\5\h\8\d\k\m\g\1\k\x\2\p\0\9\g\a\h\w\d\5\v\5\p\t\p\0\p\n\m\6\p\e\j\9\l\m\6\0\q\7\1\h\x\3\m\u\7\6\c\x\x\c\o\s\w\2\s\e\8\r\y\6\r\3\s\5\d\4\2\9\x\c\7\8\k\k\q\e\a\y\0\i\p\r\z\s\i\8\2\e\k\c\v\o\3\g\g\7\k\2\f\q\v\a\z\y\o\v\l\e\k\p\k\9\1\y\2\w\5\u\6\v\y\v\1\w\q\f\d\z\o\3\6\5\f\3\n\y\d\u\v\e\d\6\3\g\d\3\c\t\v\p\w\l\q\p\1\v\w\7\8\2\p\m\h\t\g\m\6\v\y\f\o\n\y\8\d\r\7\z\x\c\0\l\d\6\f\e\3\e\i\f\3\a\7\5\x\z\d\l\s\3\n\j\7\p\z\r\p\m\z\2\l\k\i\r\5\c\c\n\d\z\9\l\w\x\q\m\c\j\w\q\8\t\m\j\z\i\o\3\z\1\d\t\s\p\0\0\q\8\2\t\s\l\l\l\d\d\3\t\8\n\h\8\5\m\y\r\u\0\m\m\1\d\6\r\z\d\3\s\v\8\x\w\z\e\0\8\r\o\e\e\v\p\r\7\m\4\2\x\y\z\n\4\2\6\r\3\d\d\c\7\u\b\3\z\0\g\1\f\3\d\0\a\e\4\z\n\j\c\m\y\6\v\i\d\3\c\2\p\m\5\i\t\v\m\d\l\v\1\6\u\6\2\f\4\t\1\2\t\b\d\g\c\8\o\5\p\w\o\5\e\n\x\8\o\h\s\w\d\2\w\m\t\p\d\a\2\q\o\5\c\f\0\j\r\l\y\d\7\u\j\2\r\7\j\w\d\e\w\6\4\i\t\h\p\s\s\c\y\9\m\x\e\h\o\j\x\x\i\a\d\u\l\i\a\u\j\r\7\o\h\4\w\i\s\2\u\1\l\6\l\8\6\n\z\n\t\2\g\s\3\c\s\1\y\4\k\k\s\6\0\6\r\j\x\l\d\w\1\d\7\6\1\a\q\4\z\f\m\u\4\d\k\9\x\v\3\i\j\u\h\b\5\d\j\x\s\j\7\z\f\w\v\v\1\u\v\r\5\z\b\3\r\w\c\n\n\x\m\v\o\x\a\e\u\6\c\v\h\x\s\n\a\g\w\j\5\e\c\f\d\s\0\x\i\w\b\j\d\5\6\i\y\i\r\j\m\l\b\2\1\7\p\v\s\y\n\d\7\5\p\l\p\f\1\0\s\8\a\c\p\m\f\v\e\5\q\0\6\p\5\a\p\o\b\6\v\8\j\y\i\l\2\y\2\v\1\t\r\u\1\6\s\u\4\n\n\a\q\1\0\2\7\d\b\z\k\5\g\6\h\7\n\t\s\k\b\v\x\p\e\5\l\a\q\k\6\b\w\0\6\8\u\s\b\a\l\1\x\e\a\m\1\c\k\k\f\y\j\e\v\b\3\0\6\g\3\7\k\t\8\7\w\x\3\i\d\z\p\7\v\s\r\m\1\a\7\i\r\u\r\p\p\0\m\r\4\c\q\l\f\o\p\5\d\z\2\o\5\w\t\f\8\q\y\x\t\v\e\4\5\m\y\4\b\a\8\2\3\z\m\c\k\l\z\j\y\r\c\b\i\3\x\7\h\j\k\4\k\z\m\u\n\v\6\w\7\c\f\m\6\q\t\s\j\r\7\e\4\a\v\6\g\l\e\w\5\r\u\r\h\0\e\3\0\f\q\4\p\i\g\w\b\s\v\w\v\2\h\h\y\e\i\m\a\5\f\o\c\z\h\s\0\q\z\7\z\m\h\j\4\8\6\m\5\g\9\5\1\s\q\i\x\c\i\i\2\f\b\4\x\y\w\s\c\i\0\c\a\z\d\r\5\q\2\s\4\4\l\7\0\g\n\l\e\f\d\6\z\l\w\c\x\5\7\u\f\m\b\n\x\8\n\w\k\w\u\d\s\y\d\q\5\r\d\5\t\f\6\e\c\i\u\u\d\k\w\j\o\2\j\c\s\n\4\v\r\l\n\c\o\7\m\p\n\t\l\l\a\1\h\i\3\t\c\h\5\x\y\s\a\r\8\w\s\g\f\w\h\6\e\0\2\v\r\z\r\i\x\z\m\9\k\j\0\7\7\b\7\d\n\t\0\9\8\5\3\9\l\r\j\x\1\l\i\l\k\h\4\1\0\f\j\l\t\w\h\i\k\b\3\j\l\x\5\a\t\h\4\q\v\x\w\p\6\y\y\1\k\j\f\1\u\k\x\b\p\6\g\3\q\b\2\z\a\k\6\p\x\o\3\o\f\6\4\t\j\v\b\f\7\0\s\g\2\e\4\7\m\6\i\5\1\h\x\6\t\e\1\e\9\0\8\e\q\t\l\a\a\3\1\1\5\h\6\5\a\7\m\a\5\t\5\d\p\i\2\9\8\0\7\7\n\m\r\d\t\6\p\l\h\w
\q\x\9\1\c\x\s\d\4\7\g\l\6\u\v\n\l\4\a\y\3\v\m\7\d\d\f\v\1\i\e\w\c\a\g\y\o\a\3\x\l\4\z\j\0\8\8\i\1\6\c\x\1\7\x\k\m\k\6\p\a\8\r\0\w\u\1\b\g\z\9\2\b\q\3\4\m\b\1\n\s\s\2\m\d\x\q\0\f\n\h\r\e\w\p\x\e\2\d\r\o\s\g\y\4\9\c\w\3\o\w\z\i\4\v\n\4\d\v\0\p\1\f\q\v\e\v\f\t\k\u\8\e\j\d\h\8\u\g\b\t\s\y\9\s\d\h\4\5\0\l\0\f\r\8\m\n\o\u\j\y\9\9\0\h\9\3\x\t\a\5\v\w\x\x\d\v\r\d\r\p\m\l\k\f\5\w\n\m\8\2\r\2\5\a\y\s\t\7\c\m\e\g\k\r\i\h\2\z\1\t\k\k\i\8\p\e\w\y\q\w\k\f\g\r\o\n\j\t\n\i\x\l\7\x\0\d\i\h\s\6\9\1\b\2\n\i\1\i\e\q\s\s\b\a\k\s\1\j\n\r\2\u\e\g\q\3\l\u\u\k\w\g\9\j\0\q\4\q\l\s\o\s\e\6\l\u\7\d\e\f\7\7\h\g\j\t\f\n\0\1\2\i\1\m\f\5\f\r\j\s\c\i\u\s\o\s\3\l\1\8\w\a\0\r\2\9\4\7\k\f\1\1\i\m\t\o\8\5\q\2\9\y\w\4\i\5\z\k\e\p\g\a\a\k\l\k\c\2\b\i\2\o\x\n\j\y\p\0\9\2\n\a\t\p\c\5\r\x\g\9\9\x\q\s\9\w\0\1\1\d\j\g\d\2\q\3\n\y\n\g\y\o\w\3\5\s\g\9\x\8\q\v\c\f\8\w\n\4\c\p\l\d\j\s\l\f\u\a\y\a\u\l\u\0\d\f\l\l\v\x\l\g\4\h\z\x\x\h\h\l\c\s\s\q\t\7\z\5\k\x\w\z\f\p\f\7\0\4\n\4\m\2\1\q\e\3\j\m\1\x\0\5\6\c\k\1\4\n\x\c\e\2\0\2\x\r\q\d\k\w\l\i\6\j\i\s\2\9\3\0\r\w\o\y\k\b\n\s\r\5\x\w\o\w\z\j\s\4\n\k\4\w\h\a\h\p\q\9\l\t ]] 00:06:52.798 00:06:52.798 real 0m1.288s 00:06:52.798 user 0m0.885s 00:06:52.798 sys 0m0.528s 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:52.799 21:20:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.799 [2024-07-15 21:20:26.160828] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
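The dd_rw_offset test that just finished writes a single generated 4 KiB block at --seek=1, reads it back with --skip=1 --count=1, and string-compares the result against the original bytes. A condensed sketch (how the generated data reaches dump0, and the $conf placeholder, are assumptions; gen_bytes is the dd/common.sh helper seen in the trace):

    data=$(gen_bytes 4096)                                               # 4 KiB of random printable test data
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$conf"           # write one block past offset 0
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$conf" # read that block back from the same offset
    read -rn4096 data_check < dd.dump1
    [[ $data_check == "$data" ]]                                         # bytes at the offset must match what was written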
00:06:52.799 [2024-07-15 21:20:26.160900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62904 ] 00:06:52.799 { 00:06:52.799 "subsystems": [ 00:06:52.799 { 00:06:52.799 "subsystem": "bdev", 00:06:52.799 "config": [ 00:06:52.799 { 00:06:52.799 "params": { 00:06:52.799 "trtype": "pcie", 00:06:52.799 "traddr": "0000:00:10.0", 00:06:52.799 "name": "Nvme0" 00:06:52.799 }, 00:06:52.799 "method": "bdev_nvme_attach_controller" 00:06:52.799 }, 00:06:52.799 { 00:06:52.799 "method": "bdev_wait_for_examine" 00:06:52.799 } 00:06:52.799 ] 00:06:52.799 } 00:06:52.799 ] 00:06:52.799 } 00:06:53.060 [2024-07-15 21:20:26.301929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.060 [2024-07-15 21:20:26.404556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.336 [2024-07-15 21:20:26.445931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.595  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:53.595 00:06:53.595 21:20:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.595 ************************************ 00:06:53.595 END TEST spdk_dd_basic_rw 00:06:53.595 ************************************ 00:06:53.595 00:06:53.595 real 0m16.899s 00:06:53.595 user 0m11.887s 00:06:53.595 sys 0m6.077s 00:06:53.595 21:20:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.595 21:20:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.595 21:20:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:53.595 21:20:26 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:53.595 21:20:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.595 21:20:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.595 21:20:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:53.595 ************************************ 00:06:53.595 START TEST spdk_dd_posix 00:06:53.595 ************************************ 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:53.595 * Looking for test storage... 
00:06:53.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:53.595 * First test run, liburing in use 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.595 ************************************ 00:06:53.595 START TEST dd_flag_append 00:06:53.595 ************************************ 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:53.595 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=sprekqy39a3qwn3dicf6r5ehomvjyg38 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=zv6q3ynfqzz92udsjutvqm3875zxzqp1 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s sprekqy39a3qwn3dicf6r5ehomvjyg38 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s zv6q3ynfqzz92udsjutvqm3875zxzqp1 00:06:53.854 21:20:26 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:53.854 [2024-07-15 21:20:27.023235] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
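The dd_flag_append test running here seeds dump0 and dump1 with 32 random bytes each, appends dump0 onto dump1 with --oflag=append, and then verifies that dump1 holds its original bytes followed by dump0's. A condensed sketch (writing the generated strings into the dump files is assumed; the trace shows only the values and the final comparison):

    dump0=$(gen_bytes 32)
    dump1=$(gen_bytes 32)
    printf '%s' "$dump0" > dd.dump0
    printf '%s' "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append   # append to the destination instead of truncating it
    [[ $(<dd.dump1) == "$dump1$dump0" ]]                 # old contents kept, source bytes appended after them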
00:06:53.854 [2024-07-15 21:20:27.023431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62967 ] 00:06:53.854 [2024-07-15 21:20:27.163061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.114 [2024-07-15 21:20:27.255997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.114 [2024-07-15 21:20:27.296761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.373  Copying: 32/32 [B] (average 31 kBps) 00:06:54.373 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ zv6q3ynfqzz92udsjutvqm3875zxzqp1sprekqy39a3qwn3dicf6r5ehomvjyg38 == \z\v\6\q\3\y\n\f\q\z\z\9\2\u\d\s\j\u\t\v\q\m\3\8\7\5\z\x\z\q\p\1\s\p\r\e\k\q\y\3\9\a\3\q\w\n\3\d\i\c\f\6\r\5\e\h\o\m\v\j\y\g\3\8 ]] 00:06:54.373 00:06:54.373 real 0m0.538s 00:06:54.373 user 0m0.299s 00:06:54.373 sys 0m0.231s 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.373 ************************************ 00:06:54.373 END TEST dd_flag_append 00:06:54.373 ************************************ 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:54.373 ************************************ 00:06:54.373 START TEST dd_flag_directory 00:06:54.373 ************************************ 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.373 21:20:27 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.373 [2024-07-15 21:20:27.624548] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:54.373 [2024-07-15 21:20:27.624630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62991 ] 00:06:54.632 [2024-07-15 21:20:27.765253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.632 [2024-07-15 21:20:27.865704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.632 [2024-07-15 21:20:27.908246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.632 [2024-07-15 21:20:27.937548] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:54.632 [2024-07-15 21:20:27.937607] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:54.632 [2024-07-15 21:20:27.937624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.891 [2024-07-15 21:20:28.031436] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.891 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:54.891 [2024-07-15 21:20:28.181595] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:54.891 [2024-07-15 21:20:28.181689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:06:55.149 [2024-07-15 21:20:28.328426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.149 [2024-07-15 21:20:28.430602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.149 [2024-07-15 21:20:28.472513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.149 [2024-07-15 21:20:28.501370] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.149 [2024-07-15 21:20:28.501418] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.149 [2024-07-15 21:20:28.501431] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.409 [2024-07-15 21:20:28.592752] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.409 ************************************ 00:06:55.409 END TEST dd_flag_directory 00:06:55.409 ************************************ 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.409 00:06:55.409 real 0m1.122s 00:06:55.409 user 0m0.634s 00:06:55.409 sys 0m0.277s 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:55.409 ************************************ 00:06:55.409 START TEST dd_flag_nofollow 00:06:55.409 ************************************ 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.409 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.667 21:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.667 
[2024-07-15 21:20:28.829554] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:55.667 [2024-07-15 21:20:28.829623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:06:55.667 [2024-07-15 21:20:28.970723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.925 [2024-07-15 21:20:29.072418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.925 [2024-07-15 21:20:29.114487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.925 [2024-07-15 21:20:29.142997] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:55.925 [2024-07-15 21:20:29.143045] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:55.925 [2024-07-15 21:20:29.143075] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.925 [2024-07-15 21:20:29.235652] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.183 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.183 [2024-07-15 21:20:29.383810] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:56.183 [2024-07-15 21:20:29.383895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63044 ] 00:06:56.183 [2024-07-15 21:20:29.523758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.441 [2024-07-15 21:20:29.621495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.441 [2024-07-15 21:20:29.661613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.442 [2024-07-15 21:20:29.688509] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:56.442 [2024-07-15 21:20:29.688555] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:56.442 [2024-07-15 21:20:29.688569] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.442 [2024-07-15 21:20:29.778492] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:56.700 21:20:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.700 [2024-07-15 21:20:29.911844] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
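The dd_flag_directory and dd_flag_nofollow runs above follow the same expected-failure pattern: the NOT wrapper in autotest_common.sh runs spdk_dd and requires a non-zero exit ("Not a directory" when --iflag/--oflag=directory is applied to a regular file, "Too many levels of symbolic links" when nofollow hits an ln -fs symlink), and the final run without the flag must succeed. A minimal stand-alone sketch of the nofollow checks, assuming SPDK_DD points at a built build/bin/spdk_dd and SCRATCH at a writable scratch directory (both names are assumptions for illustration, not part of the suite):

  #!/usr/bin/env bash
  set -eu
  SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}   # assumed location of the built binary
  SCRATCH=${SCRATCH:-/tmp/dd-nofollow}      # assumed scratch directory
  mkdir -p "$SCRATCH"
  head -c 512 /dev/urandom > "$SCRATCH/dd.dump0"
  ln -fs "$SCRATCH/dd.dump0" "$SCRATCH/dd.dump0.link"
  # Reading through the link with nofollow set must fail (ELOOP, "Too many levels of symbolic links").
  if "$SPDK_DD" --if="$SCRATCH/dd.dump0.link" --iflag=nofollow --of="$SCRATCH/dd.dump1"; then
      echo "nofollow read unexpectedly succeeded" >&2
      exit 1
  fi
  # Without the flag the copy follows the link and must succeed.
  "$SPDK_DD" --if="$SCRATCH/dd.dump0.link" --of="$SCRATCH/dd.dump1"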
00:06:56.700 [2024-07-15 21:20:29.912046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63046 ] 00:06:56.700 [2024-07-15 21:20:30.044479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.958 [2024-07-15 21:20:30.137038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.958 [2024-07-15 21:20:30.177174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.217  Copying: 512/512 [B] (average 500 kBps) 00:06:57.217 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4m0z8x4kkrnthoc3x17j1zupgd4weh0fx7uw65rehniqkt88ollgsrtjzr96nxpc1t2a9m3jdfclg9dt0j7ftvst4qoj6nks1fwb32qawtcwx82i037k68v6cagws5vbbk3byd8l0td0en0jyiu9lkzgbpg3i1k6borfxsucbuybodl3425zow6xk0vie712e8k084xdp9hq7tcey85xaa2ewu2c8dgrhtt5j10ti7k71yu6wzc67ns02bz9k3zd1oadwevb6wntbpavt0dpx8skg509zrc40ayym3r2j1kkghsakfxji02vcme9bmxdeyu9i8xaaft4gepippzn90y436ja87tiulqi4dyxxrcsn69xsh8zebiupqdy3u87piqzjxg10xi2z3pqkfssdbrfq1r8z96phaj0pz6rftki7le9fx54ssnguvtabkd6fr357klycmux1buym85uv4hujn2xiczwxwo1mwq308bkwib53v9b26ighfq51d0b == \4\m\0\z\8\x\4\k\k\r\n\t\h\o\c\3\x\1\7\j\1\z\u\p\g\d\4\w\e\h\0\f\x\7\u\w\6\5\r\e\h\n\i\q\k\t\8\8\o\l\l\g\s\r\t\j\z\r\9\6\n\x\p\c\1\t\2\a\9\m\3\j\d\f\c\l\g\9\d\t\0\j\7\f\t\v\s\t\4\q\o\j\6\n\k\s\1\f\w\b\3\2\q\a\w\t\c\w\x\8\2\i\0\3\7\k\6\8\v\6\c\a\g\w\s\5\v\b\b\k\3\b\y\d\8\l\0\t\d\0\e\n\0\j\y\i\u\9\l\k\z\g\b\p\g\3\i\1\k\6\b\o\r\f\x\s\u\c\b\u\y\b\o\d\l\3\4\2\5\z\o\w\6\x\k\0\v\i\e\7\1\2\e\8\k\0\8\4\x\d\p\9\h\q\7\t\c\e\y\8\5\x\a\a\2\e\w\u\2\c\8\d\g\r\h\t\t\5\j\1\0\t\i\7\k\7\1\y\u\6\w\z\c\6\7\n\s\0\2\b\z\9\k\3\z\d\1\o\a\d\w\e\v\b\6\w\n\t\b\p\a\v\t\0\d\p\x\8\s\k\g\5\0\9\z\r\c\4\0\a\y\y\m\3\r\2\j\1\k\k\g\h\s\a\k\f\x\j\i\0\2\v\c\m\e\9\b\m\x\d\e\y\u\9\i\8\x\a\a\f\t\4\g\e\p\i\p\p\z\n\9\0\y\4\3\6\j\a\8\7\t\i\u\l\q\i\4\d\y\x\x\r\c\s\n\6\9\x\s\h\8\z\e\b\i\u\p\q\d\y\3\u\8\7\p\i\q\z\j\x\g\1\0\x\i\2\z\3\p\q\k\f\s\s\d\b\r\f\q\1\r\8\z\9\6\p\h\a\j\0\p\z\6\r\f\t\k\i\7\l\e\9\f\x\5\4\s\s\n\g\u\v\t\a\b\k\d\6\f\r\3\5\7\k\l\y\c\m\u\x\1\b\u\y\m\8\5\u\v\4\h\u\j\n\2\x\i\c\z\w\x\w\o\1\m\w\q\3\0\8\b\k\w\i\b\5\3\v\9\b\2\6\i\g\h\f\q\5\1\d\0\b ]] 00:06:57.217 00:06:57.217 real 0m1.616s 00:06:57.217 user 0m0.886s 00:06:57.217 sys 0m0.512s 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:57.217 ************************************ 00:06:57.217 END TEST dd_flag_nofollow 00:06:57.217 ************************************ 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:57.217 ************************************ 00:06:57.217 START TEST dd_flag_noatime 00:06:57.217 ************************************ 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:57.217 21:20:30 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721078430 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721078430 00:06:57.217 21:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:58.152 21:20:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.411 [2024-07-15 21:20:31.527904] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:06:58.411 [2024-07-15 21:20:31.527974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63094 ] 00:06:58.411 [2024-07-15 21:20:31.668271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.411 [2024-07-15 21:20:31.766111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.669 [2024-07-15 21:20:31.810924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.669  Copying: 512/512 [B] (average 500 kBps) 00:06:58.670 00:06:58.928 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.928 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721078430 )) 00:06:58.928 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.928 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721078430 )) 00:06:58.928 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.928 [2024-07-15 21:20:32.115468] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
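The noatime assertions above reduce to capturing the source file's access time with stat --printf=%X, sleeping one second, copying with --iflag=noatime, and checking that the timestamp did not move; the control copy without the flag, whose output continues below, is then expected to advance it. A condensed sketch under the same SPDK_DD/SCRATCH assumptions; note that a filesystem mounted noatime would make the control half of the check meaningless:

  src="$SCRATCH/dd.dump0"
  atime_before=$(stat --printf=%X "$src")
  sleep 1
  # With --iflag=noatime the read side must leave the access time untouched.
  "$SPDK_DD" --if="$src" --iflag=noatime --of="$SCRATCH/dd.dump1"
  atime_after=$(stat --printf=%X "$src")
  (( atime_before == atime_after )) || { echo "atime moved despite noatime" >&2; exit 1; }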
00:06:58.928 [2024-07-15 21:20:32.115560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:06:58.928 [2024-07-15 21:20:32.248872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.187 [2024-07-15 21:20:32.374334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.187 [2024-07-15 21:20:32.428186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.446  Copying: 512/512 [B] (average 500 kBps) 00:06:59.446 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.446 ************************************ 00:06:59.446 END TEST dd_flag_noatime 00:06:59.446 ************************************ 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721078432 )) 00:06:59.446 00:06:59.446 real 0m2.222s 00:06:59.446 user 0m0.697s 00:06:59.446 sys 0m0.546s 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:59.446 ************************************ 00:06:59.446 START TEST dd_flags_misc 00:06:59.446 ************************************ 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.446 21:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:59.446 [2024-07-15 21:20:32.802275] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
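dd_flags_misc, whose first direct/direct pass starts above, is a plain cross-product check: each read flag in flags_ro=(direct nonblock) is combined with each write flag in flags_rw=(direct nonblock sync dsync), and the 512-byte payload must survive every copy unchanged. The suite compares the file contents as strings; the sketch below uses cmp instead and keeps the 512-byte size because O_DIRECT generally wants block-aligned I/O (same SPDK_DD/SCRATCH assumptions as before):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
          "$SPDK_DD" --if="$SCRATCH/dd.dump0" --iflag="$flag_ro" \
                     --of="$SCRATCH/dd.dump1" --oflag="$flag_rw"
          # The copy must be byte-for-byte identical for every flag combination.
          cmp -s "$SCRATCH/dd.dump0" "$SCRATCH/dd.dump1" \
              || { echo "mismatch for $flag_ro/$flag_rw" >&2; exit 1; }
      done
  done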
00:06:59.446 [2024-07-15 21:20:32.802549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63136 ] 00:06:59.706 [2024-07-15 21:20:32.945076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.706 [2024-07-15 21:20:33.052895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.965 [2024-07-15 21:20:33.098074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.965  Copying: 512/512 [B] (average 500 kBps) 00:06:59.965 00:06:59.965 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lnwjltlgtwqwjrsgivy9nqcgb6rtxne5rdr0zyn8vohiwdr9fqd9p4rcaeco9nkzkivymc47dr5hvolm986h763mva06dc011lnlieuu7c75sioh8cf42yzw1pj708ot0mpmgfsy8r901mlk8dij3bi867n5mrujh2zfvaezrkfqxgv519o4hfmsoyuqjuzzqavilprknk8kz43p5ycsd16s9l8f3ow5pt4q2nrdtwcnctbsbsas8x6decif90vwkyz277shmqhra88ewku65lniaueurdqyz7sbaucm84smmmx1kjtqr9aesekuqn4coqnxkoy2dcnmbrf8vbajb27mfxy258jdq8hnffuj0circgd0f2qskx3vtnl644kzoy0aqsgrh81n586xsxdssh2rctzjnr4qft92uqwcxcosm9qj8wqo923fjg6fg4eml19rhax3aiswlfp5fpoyfztjcbvs5qrzjmpx3m3r1yofjjhakrusmvhkrjc7h7se == \l\n\w\j\l\t\l\g\t\w\q\w\j\r\s\g\i\v\y\9\n\q\c\g\b\6\r\t\x\n\e\5\r\d\r\0\z\y\n\8\v\o\h\i\w\d\r\9\f\q\d\9\p\4\r\c\a\e\c\o\9\n\k\z\k\i\v\y\m\c\4\7\d\r\5\h\v\o\l\m\9\8\6\h\7\6\3\m\v\a\0\6\d\c\0\1\1\l\n\l\i\e\u\u\7\c\7\5\s\i\o\h\8\c\f\4\2\y\z\w\1\p\j\7\0\8\o\t\0\m\p\m\g\f\s\y\8\r\9\0\1\m\l\k\8\d\i\j\3\b\i\8\6\7\n\5\m\r\u\j\h\2\z\f\v\a\e\z\r\k\f\q\x\g\v\5\1\9\o\4\h\f\m\s\o\y\u\q\j\u\z\z\q\a\v\i\l\p\r\k\n\k\8\k\z\4\3\p\5\y\c\s\d\1\6\s\9\l\8\f\3\o\w\5\p\t\4\q\2\n\r\d\t\w\c\n\c\t\b\s\b\s\a\s\8\x\6\d\e\c\i\f\9\0\v\w\k\y\z\2\7\7\s\h\m\q\h\r\a\8\8\e\w\k\u\6\5\l\n\i\a\u\e\u\r\d\q\y\z\7\s\b\a\u\c\m\8\4\s\m\m\m\x\1\k\j\t\q\r\9\a\e\s\e\k\u\q\n\4\c\o\q\n\x\k\o\y\2\d\c\n\m\b\r\f\8\v\b\a\j\b\2\7\m\f\x\y\2\5\8\j\d\q\8\h\n\f\f\u\j\0\c\i\r\c\g\d\0\f\2\q\s\k\x\3\v\t\n\l\6\4\4\k\z\o\y\0\a\q\s\g\r\h\8\1\n\5\8\6\x\s\x\d\s\s\h\2\r\c\t\z\j\n\r\4\q\f\t\9\2\u\q\w\c\x\c\o\s\m\9\q\j\8\w\q\o\9\2\3\f\j\g\6\f\g\4\e\m\l\1\9\r\h\a\x\3\a\i\s\w\l\f\p\5\f\p\o\y\f\z\t\j\c\b\v\s\5\q\r\z\j\m\p\x\3\m\3\r\1\y\o\f\j\j\h\a\k\r\u\s\m\v\h\k\r\j\c\7\h\7\s\e ]] 00:06:59.965 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.965 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:00.223 [2024-07-15 21:20:33.372959] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:00.223 [2024-07-15 21:20:33.373189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63151 ] 00:07:00.223 [2024-07-15 21:20:33.515409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.481 [2024-07-15 21:20:33.622128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.481 [2024-07-15 21:20:33.667078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.740  Copying: 512/512 [B] (average 500 kBps) 00:07:00.740 00:07:00.740 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lnwjltlgtwqwjrsgivy9nqcgb6rtxne5rdr0zyn8vohiwdr9fqd9p4rcaeco9nkzkivymc47dr5hvolm986h763mva06dc011lnlieuu7c75sioh8cf42yzw1pj708ot0mpmgfsy8r901mlk8dij3bi867n5mrujh2zfvaezrkfqxgv519o4hfmsoyuqjuzzqavilprknk8kz43p5ycsd16s9l8f3ow5pt4q2nrdtwcnctbsbsas8x6decif90vwkyz277shmqhra88ewku65lniaueurdqyz7sbaucm84smmmx1kjtqr9aesekuqn4coqnxkoy2dcnmbrf8vbajb27mfxy258jdq8hnffuj0circgd0f2qskx3vtnl644kzoy0aqsgrh81n586xsxdssh2rctzjnr4qft92uqwcxcosm9qj8wqo923fjg6fg4eml19rhax3aiswlfp5fpoyfztjcbvs5qrzjmpx3m3r1yofjjhakrusmvhkrjc7h7se == \l\n\w\j\l\t\l\g\t\w\q\w\j\r\s\g\i\v\y\9\n\q\c\g\b\6\r\t\x\n\e\5\r\d\r\0\z\y\n\8\v\o\h\i\w\d\r\9\f\q\d\9\p\4\r\c\a\e\c\o\9\n\k\z\k\i\v\y\m\c\4\7\d\r\5\h\v\o\l\m\9\8\6\h\7\6\3\m\v\a\0\6\d\c\0\1\1\l\n\l\i\e\u\u\7\c\7\5\s\i\o\h\8\c\f\4\2\y\z\w\1\p\j\7\0\8\o\t\0\m\p\m\g\f\s\y\8\r\9\0\1\m\l\k\8\d\i\j\3\b\i\8\6\7\n\5\m\r\u\j\h\2\z\f\v\a\e\z\r\k\f\q\x\g\v\5\1\9\o\4\h\f\m\s\o\y\u\q\j\u\z\z\q\a\v\i\l\p\r\k\n\k\8\k\z\4\3\p\5\y\c\s\d\1\6\s\9\l\8\f\3\o\w\5\p\t\4\q\2\n\r\d\t\w\c\n\c\t\b\s\b\s\a\s\8\x\6\d\e\c\i\f\9\0\v\w\k\y\z\2\7\7\s\h\m\q\h\r\a\8\8\e\w\k\u\6\5\l\n\i\a\u\e\u\r\d\q\y\z\7\s\b\a\u\c\m\8\4\s\m\m\m\x\1\k\j\t\q\r\9\a\e\s\e\k\u\q\n\4\c\o\q\n\x\k\o\y\2\d\c\n\m\b\r\f\8\v\b\a\j\b\2\7\m\f\x\y\2\5\8\j\d\q\8\h\n\f\f\u\j\0\c\i\r\c\g\d\0\f\2\q\s\k\x\3\v\t\n\l\6\4\4\k\z\o\y\0\a\q\s\g\r\h\8\1\n\5\8\6\x\s\x\d\s\s\h\2\r\c\t\z\j\n\r\4\q\f\t\9\2\u\q\w\c\x\c\o\s\m\9\q\j\8\w\q\o\9\2\3\f\j\g\6\f\g\4\e\m\l\1\9\r\h\a\x\3\a\i\s\w\l\f\p\5\f\p\o\y\f\z\t\j\c\b\v\s\5\q\r\z\j\m\p\x\3\m\3\r\1\y\o\f\j\j\h\a\k\r\u\s\m\v\h\k\r\j\c\7\h\7\s\e ]] 00:07:00.740 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.740 21:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:00.740 [2024-07-15 21:20:33.938094] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:00.740 [2024-07-15 21:20:33.938190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:07:00.740 [2024-07-15 21:20:34.080206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.998 [2024-07-15 21:20:34.187703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.998 [2024-07-15 21:20:34.232420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.257  Copying: 512/512 [B] (average 100 kBps) 00:07:01.257 00:07:01.257 21:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lnwjltlgtwqwjrsgivy9nqcgb6rtxne5rdr0zyn8vohiwdr9fqd9p4rcaeco9nkzkivymc47dr5hvolm986h763mva06dc011lnlieuu7c75sioh8cf42yzw1pj708ot0mpmgfsy8r901mlk8dij3bi867n5mrujh2zfvaezrkfqxgv519o4hfmsoyuqjuzzqavilprknk8kz43p5ycsd16s9l8f3ow5pt4q2nrdtwcnctbsbsas8x6decif90vwkyz277shmqhra88ewku65lniaueurdqyz7sbaucm84smmmx1kjtqr9aesekuqn4coqnxkoy2dcnmbrf8vbajb27mfxy258jdq8hnffuj0circgd0f2qskx3vtnl644kzoy0aqsgrh81n586xsxdssh2rctzjnr4qft92uqwcxcosm9qj8wqo923fjg6fg4eml19rhax3aiswlfp5fpoyfztjcbvs5qrzjmpx3m3r1yofjjhakrusmvhkrjc7h7se == \l\n\w\j\l\t\l\g\t\w\q\w\j\r\s\g\i\v\y\9\n\q\c\g\b\6\r\t\x\n\e\5\r\d\r\0\z\y\n\8\v\o\h\i\w\d\r\9\f\q\d\9\p\4\r\c\a\e\c\o\9\n\k\z\k\i\v\y\m\c\4\7\d\r\5\h\v\o\l\m\9\8\6\h\7\6\3\m\v\a\0\6\d\c\0\1\1\l\n\l\i\e\u\u\7\c\7\5\s\i\o\h\8\c\f\4\2\y\z\w\1\p\j\7\0\8\o\t\0\m\p\m\g\f\s\y\8\r\9\0\1\m\l\k\8\d\i\j\3\b\i\8\6\7\n\5\m\r\u\j\h\2\z\f\v\a\e\z\r\k\f\q\x\g\v\5\1\9\o\4\h\f\m\s\o\y\u\q\j\u\z\z\q\a\v\i\l\p\r\k\n\k\8\k\z\4\3\p\5\y\c\s\d\1\6\s\9\l\8\f\3\o\w\5\p\t\4\q\2\n\r\d\t\w\c\n\c\t\b\s\b\s\a\s\8\x\6\d\e\c\i\f\9\0\v\w\k\y\z\2\7\7\s\h\m\q\h\r\a\8\8\e\w\k\u\6\5\l\n\i\a\u\e\u\r\d\q\y\z\7\s\b\a\u\c\m\8\4\s\m\m\m\x\1\k\j\t\q\r\9\a\e\s\e\k\u\q\n\4\c\o\q\n\x\k\o\y\2\d\c\n\m\b\r\f\8\v\b\a\j\b\2\7\m\f\x\y\2\5\8\j\d\q\8\h\n\f\f\u\j\0\c\i\r\c\g\d\0\f\2\q\s\k\x\3\v\t\n\l\6\4\4\k\z\o\y\0\a\q\s\g\r\h\8\1\n\5\8\6\x\s\x\d\s\s\h\2\r\c\t\z\j\n\r\4\q\f\t\9\2\u\q\w\c\x\c\o\s\m\9\q\j\8\w\q\o\9\2\3\f\j\g\6\f\g\4\e\m\l\1\9\r\h\a\x\3\a\i\s\w\l\f\p\5\f\p\o\y\f\z\t\j\c\b\v\s\5\q\r\z\j\m\p\x\3\m\3\r\1\y\o\f\j\j\h\a\k\r\u\s\m\v\h\k\r\j\c\7\h\7\s\e ]] 00:07:01.257 21:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.257 21:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:01.257 [2024-07-15 21:20:34.514561] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:01.257 [2024-07-15 21:20:34.514636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63170 ] 00:07:01.516 [2024-07-15 21:20:34.656517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.516 [2024-07-15 21:20:34.758648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.516 [2024-07-15 21:20:34.801221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.774  Copying: 512/512 [B] (average 250 kBps) 00:07:01.774 00:07:01.774 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lnwjltlgtwqwjrsgivy9nqcgb6rtxne5rdr0zyn8vohiwdr9fqd9p4rcaeco9nkzkivymc47dr5hvolm986h763mva06dc011lnlieuu7c75sioh8cf42yzw1pj708ot0mpmgfsy8r901mlk8dij3bi867n5mrujh2zfvaezrkfqxgv519o4hfmsoyuqjuzzqavilprknk8kz43p5ycsd16s9l8f3ow5pt4q2nrdtwcnctbsbsas8x6decif90vwkyz277shmqhra88ewku65lniaueurdqyz7sbaucm84smmmx1kjtqr9aesekuqn4coqnxkoy2dcnmbrf8vbajb27mfxy258jdq8hnffuj0circgd0f2qskx3vtnl644kzoy0aqsgrh81n586xsxdssh2rctzjnr4qft92uqwcxcosm9qj8wqo923fjg6fg4eml19rhax3aiswlfp5fpoyfztjcbvs5qrzjmpx3m3r1yofjjhakrusmvhkrjc7h7se == \l\n\w\j\l\t\l\g\t\w\q\w\j\r\s\g\i\v\y\9\n\q\c\g\b\6\r\t\x\n\e\5\r\d\r\0\z\y\n\8\v\o\h\i\w\d\r\9\f\q\d\9\p\4\r\c\a\e\c\o\9\n\k\z\k\i\v\y\m\c\4\7\d\r\5\h\v\o\l\m\9\8\6\h\7\6\3\m\v\a\0\6\d\c\0\1\1\l\n\l\i\e\u\u\7\c\7\5\s\i\o\h\8\c\f\4\2\y\z\w\1\p\j\7\0\8\o\t\0\m\p\m\g\f\s\y\8\r\9\0\1\m\l\k\8\d\i\j\3\b\i\8\6\7\n\5\m\r\u\j\h\2\z\f\v\a\e\z\r\k\f\q\x\g\v\5\1\9\o\4\h\f\m\s\o\y\u\q\j\u\z\z\q\a\v\i\l\p\r\k\n\k\8\k\z\4\3\p\5\y\c\s\d\1\6\s\9\l\8\f\3\o\w\5\p\t\4\q\2\n\r\d\t\w\c\n\c\t\b\s\b\s\a\s\8\x\6\d\e\c\i\f\9\0\v\w\k\y\z\2\7\7\s\h\m\q\h\r\a\8\8\e\w\k\u\6\5\l\n\i\a\u\e\u\r\d\q\y\z\7\s\b\a\u\c\m\8\4\s\m\m\m\x\1\k\j\t\q\r\9\a\e\s\e\k\u\q\n\4\c\o\q\n\x\k\o\y\2\d\c\n\m\b\r\f\8\v\b\a\j\b\2\7\m\f\x\y\2\5\8\j\d\q\8\h\n\f\f\u\j\0\c\i\r\c\g\d\0\f\2\q\s\k\x\3\v\t\n\l\6\4\4\k\z\o\y\0\a\q\s\g\r\h\8\1\n\5\8\6\x\s\x\d\s\s\h\2\r\c\t\z\j\n\r\4\q\f\t\9\2\u\q\w\c\x\c\o\s\m\9\q\j\8\w\q\o\9\2\3\f\j\g\6\f\g\4\e\m\l\1\9\r\h\a\x\3\a\i\s\w\l\f\p\5\f\p\o\y\f\z\t\j\c\b\v\s\5\q\r\z\j\m\p\x\3\m\3\r\1\y\o\f\j\j\h\a\k\r\u\s\m\v\h\k\r\j\c\7\h\7\s\e ]] 00:07:01.774 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:01.774 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:01.774 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:01.775 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:01.775 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.775 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:01.775 [2024-07-15 21:20:35.068489] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
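To confirm that these oflag values actually reach the destination open(2) rather than being silently dropped, a single run can be wrapped under strace; O_DIRECT, O_SYNC or O_DSYNC should then appear on the open of dd.dump1. This is an optional debugging aid, not part of the suite, and assumes strace is installed:

  strace -f -e trace=open,openat -o /tmp/spdk_dd.strace \
      "$SPDK_DD" --if="$SCRATCH/dd.dump0" --of="$SCRATCH/dd.dump1" --oflag=dsync
  # The destination open should carry O_DSYNC (the exact flag set depends on the spdk_dd build).
  grep dd.dump1 /tmp/spdk_dd.strace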
00:07:01.775 [2024-07-15 21:20:35.068559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:07:02.034 [2024-07-15 21:20:35.209106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.034 [2024-07-15 21:20:35.302516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.034 [2024-07-15 21:20:35.344359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.293  Copying: 512/512 [B] (average 500 kBps) 00:07:02.293 00:07:02.293 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ envskifz7spkwh4ieikqfwe2vlrztn36sb9p2sy2bn2pftgr94of0wloewqi3dy21f9c9pw34pgcrviq05q7fixwpoy9ztd0v08hnyghwj3lox0wjp0xw8tgqze60iy8mtxh3rhx9zboeyvwft7uwgces4rx4w3rkcpj0npn08x69k2nryrhd0oqzi8c2gkags9u73j5h6rzvugqy5h6bxfb4ngiov4vz7v8eoypsllsrwdu86cp1g3ywww33tj9i6zyctl0v5ypgau1aas9spr7quv27o8jxbyk2cjex3arrws175t2n118e2manw1ui2tvbqvhngi8rkr7z0g64s20ir94j739ctgif4bvekwcm61j2b8gekw8b9flyd7a7cl77x0oqef0pbnulica3i409lc5ylpwefgz0387lsov33ea9545o9tsib049g7fz79ifebnr8zsjidkii2pyuget4ao7z0on5bjk80ayjf1afz1rzjsv1goy15livuw == \e\n\v\s\k\i\f\z\7\s\p\k\w\h\4\i\e\i\k\q\f\w\e\2\v\l\r\z\t\n\3\6\s\b\9\p\2\s\y\2\b\n\2\p\f\t\g\r\9\4\o\f\0\w\l\o\e\w\q\i\3\d\y\2\1\f\9\c\9\p\w\3\4\p\g\c\r\v\i\q\0\5\q\7\f\i\x\w\p\o\y\9\z\t\d\0\v\0\8\h\n\y\g\h\w\j\3\l\o\x\0\w\j\p\0\x\w\8\t\g\q\z\e\6\0\i\y\8\m\t\x\h\3\r\h\x\9\z\b\o\e\y\v\w\f\t\7\u\w\g\c\e\s\4\r\x\4\w\3\r\k\c\p\j\0\n\p\n\0\8\x\6\9\k\2\n\r\y\r\h\d\0\o\q\z\i\8\c\2\g\k\a\g\s\9\u\7\3\j\5\h\6\r\z\v\u\g\q\y\5\h\6\b\x\f\b\4\n\g\i\o\v\4\v\z\7\v\8\e\o\y\p\s\l\l\s\r\w\d\u\8\6\c\p\1\g\3\y\w\w\w\3\3\t\j\9\i\6\z\y\c\t\l\0\v\5\y\p\g\a\u\1\a\a\s\9\s\p\r\7\q\u\v\2\7\o\8\j\x\b\y\k\2\c\j\e\x\3\a\r\r\w\s\1\7\5\t\2\n\1\1\8\e\2\m\a\n\w\1\u\i\2\t\v\b\q\v\h\n\g\i\8\r\k\r\7\z\0\g\6\4\s\2\0\i\r\9\4\j\7\3\9\c\t\g\i\f\4\b\v\e\k\w\c\m\6\1\j\2\b\8\g\e\k\w\8\b\9\f\l\y\d\7\a\7\c\l\7\7\x\0\o\q\e\f\0\p\b\n\u\l\i\c\a\3\i\4\0\9\l\c\5\y\l\p\w\e\f\g\z\0\3\8\7\l\s\o\v\3\3\e\a\9\5\4\5\o\9\t\s\i\b\0\4\9\g\7\f\z\7\9\i\f\e\b\n\r\8\z\s\j\i\d\k\i\i\2\p\y\u\g\e\t\4\a\o\7\z\0\o\n\5\b\j\k\8\0\a\y\j\f\1\a\f\z\1\r\z\j\s\v\1\g\o\y\1\5\l\i\v\u\w ]] 00:07:02.293 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.293 21:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:02.293 [2024-07-15 21:20:35.600697] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:02.293 [2024-07-15 21:20:35.600771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63189 ] 00:07:02.554 [2024-07-15 21:20:35.740547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.554 [2024-07-15 21:20:35.838373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.554 [2024-07-15 21:20:35.880991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.813  Copying: 512/512 [B] (average 500 kBps) 00:07:02.813 00:07:02.814 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ envskifz7spkwh4ieikqfwe2vlrztn36sb9p2sy2bn2pftgr94of0wloewqi3dy21f9c9pw34pgcrviq05q7fixwpoy9ztd0v08hnyghwj3lox0wjp0xw8tgqze60iy8mtxh3rhx9zboeyvwft7uwgces4rx4w3rkcpj0npn08x69k2nryrhd0oqzi8c2gkags9u73j5h6rzvugqy5h6bxfb4ngiov4vz7v8eoypsllsrwdu86cp1g3ywww33tj9i6zyctl0v5ypgau1aas9spr7quv27o8jxbyk2cjex3arrws175t2n118e2manw1ui2tvbqvhngi8rkr7z0g64s20ir94j739ctgif4bvekwcm61j2b8gekw8b9flyd7a7cl77x0oqef0pbnulica3i409lc5ylpwefgz0387lsov33ea9545o9tsib049g7fz79ifebnr8zsjidkii2pyuget4ao7z0on5bjk80ayjf1afz1rzjsv1goy15livuw == \e\n\v\s\k\i\f\z\7\s\p\k\w\h\4\i\e\i\k\q\f\w\e\2\v\l\r\z\t\n\3\6\s\b\9\p\2\s\y\2\b\n\2\p\f\t\g\r\9\4\o\f\0\w\l\o\e\w\q\i\3\d\y\2\1\f\9\c\9\p\w\3\4\p\g\c\r\v\i\q\0\5\q\7\f\i\x\w\p\o\y\9\z\t\d\0\v\0\8\h\n\y\g\h\w\j\3\l\o\x\0\w\j\p\0\x\w\8\t\g\q\z\e\6\0\i\y\8\m\t\x\h\3\r\h\x\9\z\b\o\e\y\v\w\f\t\7\u\w\g\c\e\s\4\r\x\4\w\3\r\k\c\p\j\0\n\p\n\0\8\x\6\9\k\2\n\r\y\r\h\d\0\o\q\z\i\8\c\2\g\k\a\g\s\9\u\7\3\j\5\h\6\r\z\v\u\g\q\y\5\h\6\b\x\f\b\4\n\g\i\o\v\4\v\z\7\v\8\e\o\y\p\s\l\l\s\r\w\d\u\8\6\c\p\1\g\3\y\w\w\w\3\3\t\j\9\i\6\z\y\c\t\l\0\v\5\y\p\g\a\u\1\a\a\s\9\s\p\r\7\q\u\v\2\7\o\8\j\x\b\y\k\2\c\j\e\x\3\a\r\r\w\s\1\7\5\t\2\n\1\1\8\e\2\m\a\n\w\1\u\i\2\t\v\b\q\v\h\n\g\i\8\r\k\r\7\z\0\g\6\4\s\2\0\i\r\9\4\j\7\3\9\c\t\g\i\f\4\b\v\e\k\w\c\m\6\1\j\2\b\8\g\e\k\w\8\b\9\f\l\y\d\7\a\7\c\l\7\7\x\0\o\q\e\f\0\p\b\n\u\l\i\c\a\3\i\4\0\9\l\c\5\y\l\p\w\e\f\g\z\0\3\8\7\l\s\o\v\3\3\e\a\9\5\4\5\o\9\t\s\i\b\0\4\9\g\7\f\z\7\9\i\f\e\b\n\r\8\z\s\j\i\d\k\i\i\2\p\y\u\g\e\t\4\a\o\7\z\0\o\n\5\b\j\k\8\0\a\y\j\f\1\a\f\z\1\r\z\j\s\v\1\g\o\y\1\5\l\i\v\u\w ]] 00:07:02.814 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.814 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:02.814 [2024-07-15 21:20:36.141009] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:02.814 [2024-07-15 21:20:36.141081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63193 ] 00:07:03.072 [2024-07-15 21:20:36.282194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.072 [2024-07-15 21:20:36.379674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.072 [2024-07-15 21:20:36.421149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.332  Copying: 512/512 [B] (average 250 kBps) 00:07:03.332 00:07:03.332 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ envskifz7spkwh4ieikqfwe2vlrztn36sb9p2sy2bn2pftgr94of0wloewqi3dy21f9c9pw34pgcrviq05q7fixwpoy9ztd0v08hnyghwj3lox0wjp0xw8tgqze60iy8mtxh3rhx9zboeyvwft7uwgces4rx4w3rkcpj0npn08x69k2nryrhd0oqzi8c2gkags9u73j5h6rzvugqy5h6bxfb4ngiov4vz7v8eoypsllsrwdu86cp1g3ywww33tj9i6zyctl0v5ypgau1aas9spr7quv27o8jxbyk2cjex3arrws175t2n118e2manw1ui2tvbqvhngi8rkr7z0g64s20ir94j739ctgif4bvekwcm61j2b8gekw8b9flyd7a7cl77x0oqef0pbnulica3i409lc5ylpwefgz0387lsov33ea9545o9tsib049g7fz79ifebnr8zsjidkii2pyuget4ao7z0on5bjk80ayjf1afz1rzjsv1goy15livuw == \e\n\v\s\k\i\f\z\7\s\p\k\w\h\4\i\e\i\k\q\f\w\e\2\v\l\r\z\t\n\3\6\s\b\9\p\2\s\y\2\b\n\2\p\f\t\g\r\9\4\o\f\0\w\l\o\e\w\q\i\3\d\y\2\1\f\9\c\9\p\w\3\4\p\g\c\r\v\i\q\0\5\q\7\f\i\x\w\p\o\y\9\z\t\d\0\v\0\8\h\n\y\g\h\w\j\3\l\o\x\0\w\j\p\0\x\w\8\t\g\q\z\e\6\0\i\y\8\m\t\x\h\3\r\h\x\9\z\b\o\e\y\v\w\f\t\7\u\w\g\c\e\s\4\r\x\4\w\3\r\k\c\p\j\0\n\p\n\0\8\x\6\9\k\2\n\r\y\r\h\d\0\o\q\z\i\8\c\2\g\k\a\g\s\9\u\7\3\j\5\h\6\r\z\v\u\g\q\y\5\h\6\b\x\f\b\4\n\g\i\o\v\4\v\z\7\v\8\e\o\y\p\s\l\l\s\r\w\d\u\8\6\c\p\1\g\3\y\w\w\w\3\3\t\j\9\i\6\z\y\c\t\l\0\v\5\y\p\g\a\u\1\a\a\s\9\s\p\r\7\q\u\v\2\7\o\8\j\x\b\y\k\2\c\j\e\x\3\a\r\r\w\s\1\7\5\t\2\n\1\1\8\e\2\m\a\n\w\1\u\i\2\t\v\b\q\v\h\n\g\i\8\r\k\r\7\z\0\g\6\4\s\2\0\i\r\9\4\j\7\3\9\c\t\g\i\f\4\b\v\e\k\w\c\m\6\1\j\2\b\8\g\e\k\w\8\b\9\f\l\y\d\7\a\7\c\l\7\7\x\0\o\q\e\f\0\p\b\n\u\l\i\c\a\3\i\4\0\9\l\c\5\y\l\p\w\e\f\g\z\0\3\8\7\l\s\o\v\3\3\e\a\9\5\4\5\o\9\t\s\i\b\0\4\9\g\7\f\z\7\9\i\f\e\b\n\r\8\z\s\j\i\d\k\i\i\2\p\y\u\g\e\t\4\a\o\7\z\0\o\n\5\b\j\k\8\0\a\y\j\f\1\a\f\z\1\r\z\j\s\v\1\g\o\y\1\5\l\i\v\u\w ]] 00:07:03.332 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.332 21:20:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:03.332 [2024-07-15 21:20:36.680702] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:03.332 [2024-07-15 21:20:36.680770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63208 ] 00:07:03.591 [2024-07-15 21:20:36.821444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.591 [2024-07-15 21:20:36.918637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.851 [2024-07-15 21:20:36.960395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.851  Copying: 512/512 [B] (average 250 kBps) 00:07:03.851 00:07:03.851 ************************************ 00:07:03.851 END TEST dd_flags_misc 00:07:03.851 ************************************ 00:07:03.851 21:20:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ envskifz7spkwh4ieikqfwe2vlrztn36sb9p2sy2bn2pftgr94of0wloewqi3dy21f9c9pw34pgcrviq05q7fixwpoy9ztd0v08hnyghwj3lox0wjp0xw8tgqze60iy8mtxh3rhx9zboeyvwft7uwgces4rx4w3rkcpj0npn08x69k2nryrhd0oqzi8c2gkags9u73j5h6rzvugqy5h6bxfb4ngiov4vz7v8eoypsllsrwdu86cp1g3ywww33tj9i6zyctl0v5ypgau1aas9spr7quv27o8jxbyk2cjex3arrws175t2n118e2manw1ui2tvbqvhngi8rkr7z0g64s20ir94j739ctgif4bvekwcm61j2b8gekw8b9flyd7a7cl77x0oqef0pbnulica3i409lc5ylpwefgz0387lsov33ea9545o9tsib049g7fz79ifebnr8zsjidkii2pyuget4ao7z0on5bjk80ayjf1afz1rzjsv1goy15livuw == \e\n\v\s\k\i\f\z\7\s\p\k\w\h\4\i\e\i\k\q\f\w\e\2\v\l\r\z\t\n\3\6\s\b\9\p\2\s\y\2\b\n\2\p\f\t\g\r\9\4\o\f\0\w\l\o\e\w\q\i\3\d\y\2\1\f\9\c\9\p\w\3\4\p\g\c\r\v\i\q\0\5\q\7\f\i\x\w\p\o\y\9\z\t\d\0\v\0\8\h\n\y\g\h\w\j\3\l\o\x\0\w\j\p\0\x\w\8\t\g\q\z\e\6\0\i\y\8\m\t\x\h\3\r\h\x\9\z\b\o\e\y\v\w\f\t\7\u\w\g\c\e\s\4\r\x\4\w\3\r\k\c\p\j\0\n\p\n\0\8\x\6\9\k\2\n\r\y\r\h\d\0\o\q\z\i\8\c\2\g\k\a\g\s\9\u\7\3\j\5\h\6\r\z\v\u\g\q\y\5\h\6\b\x\f\b\4\n\g\i\o\v\4\v\z\7\v\8\e\o\y\p\s\l\l\s\r\w\d\u\8\6\c\p\1\g\3\y\w\w\w\3\3\t\j\9\i\6\z\y\c\t\l\0\v\5\y\p\g\a\u\1\a\a\s\9\s\p\r\7\q\u\v\2\7\o\8\j\x\b\y\k\2\c\j\e\x\3\a\r\r\w\s\1\7\5\t\2\n\1\1\8\e\2\m\a\n\w\1\u\i\2\t\v\b\q\v\h\n\g\i\8\r\k\r\7\z\0\g\6\4\s\2\0\i\r\9\4\j\7\3\9\c\t\g\i\f\4\b\v\e\k\w\c\m\6\1\j\2\b\8\g\e\k\w\8\b\9\f\l\y\d\7\a\7\c\l\7\7\x\0\o\q\e\f\0\p\b\n\u\l\i\c\a\3\i\4\0\9\l\c\5\y\l\p\w\e\f\g\z\0\3\8\7\l\s\o\v\3\3\e\a\9\5\4\5\o\9\t\s\i\b\0\4\9\g\7\f\z\7\9\i\f\e\b\n\r\8\z\s\j\i\d\k\i\i\2\p\y\u\g\e\t\4\a\o\7\z\0\o\n\5\b\j\k\8\0\a\y\j\f\1\a\f\z\1\r\z\j\s\v\1\g\o\y\1\5\l\i\v\u\w ]] 00:07:03.851 00:07:03.851 real 0m4.432s 00:07:03.851 user 0m2.511s 00:07:03.851 sys 0m1.931s 00:07:03.851 21:20:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.851 21:20:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:04.109 * Second test run, disabling liburing, forcing AIO 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.109 ************************************ 00:07:04.109 START TEST dd_flag_append_forced_aio 00:07:04.109 ************************************ 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.109 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=traj3mm8f6pi7b05whfjh894jkwtddol 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=w8jcx1byu72x8sr3cbkk4ryse7qrichv 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s traj3mm8f6pi7b05whfjh894jkwtddol 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s w8jcx1byu72x8sr3cbkk4ryse7qrichv 00:07:04.110 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:04.110 [2024-07-15 21:20:37.301484] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
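The forced-AIO append run that starts above ends, in the output that follows, with a check that the destination now reads back as the original dump1 contents immediately followed by dump0; --aio switches spdk_dd from the io_uring path used by the earlier runs to the POSIX AIO fallback, which is what "Second test run, disabling liburing, forcing AIO" refers to. A stripped-down version with fixed payloads instead of gen_bytes 32 (the payload strings are made up for illustration; SPDK_DD/SCRATCH as before):

  printf %s payload-zero > "$SCRATCH/dd.dump0"
  printf %s payload-one  > "$SCRATCH/dd.dump1"
  # --oflag=append must add dump0 to the end of dump1 without truncating it; --aio forces the AIO path.
  "$SPDK_DD" --aio --if="$SCRATCH/dd.dump0" --of="$SCRATCH/dd.dump1" --oflag=append
  [[ $(<"$SCRATCH/dd.dump1") == payload-onepayload-zero ]] \
      || { echo "append result is wrong" >&2; exit 1; }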
00:07:04.110 [2024-07-15 21:20:37.301558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63232 ] 00:07:04.110 [2024-07-15 21:20:37.441640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.368 [2024-07-15 21:20:37.539067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.368 [2024-07-15 21:20:37.580504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.628  Copying: 32/32 [B] (average 31 kBps) 00:07:04.628 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ w8jcx1byu72x8sr3cbkk4ryse7qrichvtraj3mm8f6pi7b05whfjh894jkwtddol == \w\8\j\c\x\1\b\y\u\7\2\x\8\s\r\3\c\b\k\k\4\r\y\s\e\7\q\r\i\c\h\v\t\r\a\j\3\m\m\8\f\6\p\i\7\b\0\5\w\h\f\j\h\8\9\4\j\k\w\t\d\d\o\l ]] 00:07:04.628 00:07:04.628 real 0m0.561s 00:07:04.628 user 0m0.319s 00:07:04.628 sys 0m0.122s 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 ************************************ 00:07:04.628 END TEST dd_flag_append_forced_aio 00:07:04.628 ************************************ 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.628 ************************************ 00:07:04.628 START TEST dd_flag_directory_forced_aio 00:07:04.628 ************************************ 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.628 21:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.628 [2024-07-15 21:20:37.930772] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:04.628 [2024-07-15 21:20:37.930852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63263 ] 00:07:04.887 [2024-07-15 21:20:38.070795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.887 [2024-07-15 21:20:38.174736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.887 [2024-07-15 21:20:38.216442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.887 [2024-07-15 21:20:38.245511] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.887 [2024-07-15 21:20:38.245559] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.887 [2024-07-15 21:20:38.245572] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.146 [2024-07-15 21:20:38.340352] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:05.146 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.147 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.147 [2024-07-15 21:20:38.483302] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:05.147 [2024-07-15 21:20:38.483380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63273 ] 00:07:05.406 [2024-07-15 21:20:38.623529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.406 [2024-07-15 21:20:38.723974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.406 [2024-07-15 21:20:38.765686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.665 [2024-07-15 21:20:38.794118] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:05.665 [2024-07-15 21:20:38.794167] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:05.665 [2024-07-15 21:20:38.794179] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.665 [2024-07-15 21:20:38.884653] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:05.665 
21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.665 00:07:05.665 real 0m1.101s 00:07:05.665 user 0m0.629s 00:07:05.665 sys 0m0.262s 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.665 ************************************ 00:07:05.665 END TEST dd_flag_directory_forced_aio 00:07:05.665 ************************************ 00:07:05.665 21:20:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:05.665 21:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:05.665 21:20:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:05.665 21:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.665 21:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.665 21:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.962 ************************************ 00:07:05.962 START TEST dd_flag_nofollow_forced_aio 00:07:05.962 ************************************ 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.962 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.962 [2024-07-15 21:20:39.112441] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:05.962 [2024-07-15 21:20:39.112512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:07:05.962 [2024-07-15 21:20:39.253938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.247 [2024-07-15 21:20:39.354020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.247 [2024-07-15 21:20:39.395226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.247 [2024-07-15 21:20:39.422219] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:06.247 [2024-07-15 21:20:39.422264] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:06.247 [2024-07-15 21:20:39.422277] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.247 [2024-07-15 21:20:39.515151] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.247 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:06.247 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.247 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:06.247 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:07:06.248 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.506 21:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:06.506 [2024-07-15 21:20:39.672810] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:06.506 [2024-07-15 21:20:39.672887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:07:06.506 [2024-07-15 21:20:39.813767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.766 [2024-07-15 21:20:39.911934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.766 [2024-07-15 21:20:39.953587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.766 [2024-07-15 21:20:39.981860] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:06.766 [2024-07-15 21:20:39.981910] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:06.766 [2024-07-15 21:20:39.981932] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.766 [2024-07-15 21:20:40.074874] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.025 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.025 [2024-07-15 21:20:40.242547] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:07.025 [2024-07-15 21:20:40.242799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63320 ] 00:07:07.025 [2024-07-15 21:20:40.383181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.283 [2024-07-15 21:20:40.483936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.283 [2024-07-15 21:20:40.525376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.543  Copying: 512/512 [B] (average 500 kBps) 00:07:07.543 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ l6foo2fu8kqyjj2kr840pb0sw8zthp6b4n6y7k5qkrmcg16r4706xgx4jafg6bwz2i6qk6dwz1w5nxna48bn7aut10qdsm8q3qg4lhpy7gwfglvq6b04xhebmdyx73ma7mxbm3dput2co6alabvpnfcoluulciyjdhmdwbuhb5dradk973a3affbqnjukccfqqysiv0z5chqvk79n8jf0e6bh6m1hui2alvwnpceznednoljnx8xu9lrqwtfpqluowazl2fty6rjnu2a0d9fl6o3ozgpghul2xddb4gqh5issoiddgmsxgczw0bwsed6fu2r93bnz6c97zlm3lv0sp4dcxkw4lpdgitwwumq7ylfucrswl60fy5lb3gnfxoyg4g8nghch85274txwxgwi9trzlb7fg42896r88mdyufpbwwsum6y5fpkublsh5zpkykwo1ycaxzqa10fde33tvwi6rf25bzbhj6v3itqe4ed8e5ob5639vxanhul3uxw == \l\6\f\o\o\2\f\u\8\k\q\y\j\j\2\k\r\8\4\0\p\b\0\s\w\8\z\t\h\p\6\b\4\n\6\y\7\k\5\q\k\r\m\c\g\1\6\r\4\7\0\6\x\g\x\4\j\a\f\g\6\b\w\z\2\i\6\q\k\6\d\w\z\1\w\5\n\x\n\a\4\8\b\n\7\a\u\t\1\0\q\d\s\m\8\q\3\q\g\4\l\h\p\y\7\g\w\f\g\l\v\q\6\b\0\4\x\h\e\b\m\d\y\x\7\3\m\a\7\m\x\b\m\3\d\p\u\t\2\c\o\6\a\l\a\b\v\p\n\f\c\o\l\u\u\l\c\i\y\j\d\h\m\d\w\b\u\h\b\5\d\r\a\d\k\9\7\3\a\3\a\f\f\b\q\n\j\u\k\c\c\f\q\q\y\s\i\v\0\z\5\c\h\q\v\k\7\9\n\8\j\f\0\e\6\b\h\6\m\1\h\u\i\2\a\l\v\w\n\p\c\e\z\n\e\d\n\o\l\j\n\x\8\x\u\9\l\r\q\w\t\f\p\q\l\u\o\w\a\z\l\2\f\t\y\6\r\j\n\u\2\a\0\d\9\f\l\6\o\3\o\z\g\p\g\h\u\l\2\x\d\d\b\4\g\q\h\5\i\s\s\o\i\d\d\g\m\s\x\g\c\z\w\0\b\w\s\e\d\6\f\u\2\r\9\3\b\n\z\6\c\9\7\z\l\m\3\l\v\0\s\p\4\d\c\x\k\w\4\l\p\d\g\i\t\w\w\u\m\q\7\y\l\f\u\c\r\s\w\l\6\0\f\y\5\l\b\3\g\n\f\x\o\y\g\4\g\8\n\g\h\c\h\8\5\2\7\4\t\x\w\x\g\w\i\9\t\r\z\l\b\7\f\g\4\2\8\9\6\r\8\8\m\d\y\u\f\p\b\w\w\s\u\m\6\y\5\f\p\k\u\b\l\s\h\5\z\p\k\y\k\w\o\1\y\c\a\x\z\q\a\1\0\f\d\e\3\3\t\v\w\i\6\r\f\2\5\b\z\b\h\j\6\v\3\i\t\q\e\4\e\d\8\e\5\o\b\5\6\3\9\v\x\a\n\h\u\l\3\u\x\w ]] 00:07:07.543 00:07:07.543 real 0m1.712s 00:07:07.543 user 0m0.954s 00:07:07.543 sys 0m0.417s 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.543 ************************************ 00:07:07.543 END TEST dd_flag_nofollow_forced_aio 00:07:07.543 ************************************ 00:07:07.543 21:20:40 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 ************************************ 00:07:07.543 START TEST dd_flag_noatime_forced_aio 00:07:07.543 ************************************ 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721078440 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721078440 00:07:07.543 21:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:08.921 21:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.921 [2024-07-15 21:20:41.904501] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:08.921 [2024-07-15 21:20:41.904573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63366 ] 00:07:08.921 [2024-07-15 21:20:42.046210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.922 [2024-07-15 21:20:42.137777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.922 [2024-07-15 21:20:42.178213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.181  Copying: 512/512 [B] (average 500 kBps) 00:07:09.181 00:07:09.181 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.181 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721078440 )) 00:07:09.181 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.181 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721078440 )) 00:07:09.181 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.181 [2024-07-15 21:20:42.462254] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:09.181 [2024-07-15 21:20:42.462319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:07:09.439 [2024-07-15 21:20:42.604263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.439 [2024-07-15 21:20:42.697727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.439 [2024-07-15 21:20:42.738314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.698  Copying: 512/512 [B] (average 500 kBps) 00:07:09.698 00:07:09.698 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.698 ************************************ 00:07:09.698 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721078442 )) 00:07:09.698 00:07:09.698 real 0m2.158s 00:07:09.698 user 0m0.625s 00:07:09.698 sys 0m0.284s 00:07:09.698 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.698 21:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 END TEST dd_flag_noatime_forced_aio 00:07:09.698 ************************************ 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.698 21:20:43 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 ************************************ 00:07:09.698 START TEST dd_flags_misc_forced_aio 00:07:09.698 ************************************ 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.698 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:09.957 [2024-07-15 21:20:43.116069] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:09.957 [2024-07-15 21:20:43.116275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63404 ] 00:07:09.957 [2024-07-15 21:20:43.257185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.216 [2024-07-15 21:20:43.350382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.216 [2024-07-15 21:20:43.391422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.475  Copying: 512/512 [B] (average 500 kBps) 00:07:10.475 00:07:10.476 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i9a49qxg8bdmtc1p3pwv5uvmspwrgdw25e88vipmkxv7i70qloqro1oc8bly8r78u1clp9jpdt5k1vddf48z1u20yn259hlu6ipuj5uhjg4qpojgprpvg2ia36uj06ptf0isdfcua3gqq9oj5rtbnm8fsivcswdwf2oxmaac9ivbqb5khh81gsdmbyrdm9r90ol4awgzhvxs0vduw9jlmlknrhurwah0gh2qdi1j04uhkt0upxqa7pvv3sqw8bi9td0jhzfinliyll6waxl3yra8u80p3tjx9ikmz0efenz9jk2rqzg39f4nu9m2tj6ju5nm7wnft5rihgdzfy9hp0ypa6wrnm7oov38mqnqyn42ulwjcn0t2gl2cg3h6w4302mu13m4youccxqayqeg55re3mlgof009e4b9xeqyuwup9b1tws7prkb4061w4mx9lkegungvljotdnutqqkx68uk743fplh4pq4o6uqfiskn4rbpe8eyvutiomew4gj == 
\i\9\a\4\9\q\x\g\8\b\d\m\t\c\1\p\3\p\w\v\5\u\v\m\s\p\w\r\g\d\w\2\5\e\8\8\v\i\p\m\k\x\v\7\i\7\0\q\l\o\q\r\o\1\o\c\8\b\l\y\8\r\7\8\u\1\c\l\p\9\j\p\d\t\5\k\1\v\d\d\f\4\8\z\1\u\2\0\y\n\2\5\9\h\l\u\6\i\p\u\j\5\u\h\j\g\4\q\p\o\j\g\p\r\p\v\g\2\i\a\3\6\u\j\0\6\p\t\f\0\i\s\d\f\c\u\a\3\g\q\q\9\o\j\5\r\t\b\n\m\8\f\s\i\v\c\s\w\d\w\f\2\o\x\m\a\a\c\9\i\v\b\q\b\5\k\h\h\8\1\g\s\d\m\b\y\r\d\m\9\r\9\0\o\l\4\a\w\g\z\h\v\x\s\0\v\d\u\w\9\j\l\m\l\k\n\r\h\u\r\w\a\h\0\g\h\2\q\d\i\1\j\0\4\u\h\k\t\0\u\p\x\q\a\7\p\v\v\3\s\q\w\8\b\i\9\t\d\0\j\h\z\f\i\n\l\i\y\l\l\6\w\a\x\l\3\y\r\a\8\u\8\0\p\3\t\j\x\9\i\k\m\z\0\e\f\e\n\z\9\j\k\2\r\q\z\g\3\9\f\4\n\u\9\m\2\t\j\6\j\u\5\n\m\7\w\n\f\t\5\r\i\h\g\d\z\f\y\9\h\p\0\y\p\a\6\w\r\n\m\7\o\o\v\3\8\m\q\n\q\y\n\4\2\u\l\w\j\c\n\0\t\2\g\l\2\c\g\3\h\6\w\4\3\0\2\m\u\1\3\m\4\y\o\u\c\c\x\q\a\y\q\e\g\5\5\r\e\3\m\l\g\o\f\0\0\9\e\4\b\9\x\e\q\y\u\w\u\p\9\b\1\t\w\s\7\p\r\k\b\4\0\6\1\w\4\m\x\9\l\k\e\g\u\n\g\v\l\j\o\t\d\n\u\t\q\q\k\x\6\8\u\k\7\4\3\f\p\l\h\4\p\q\4\o\6\u\q\f\i\s\k\n\4\r\b\p\e\8\e\y\v\u\t\i\o\m\e\w\4\g\j ]] 00:07:10.476 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.476 21:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:10.476 [2024-07-15 21:20:43.664270] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:10.476 [2024-07-15 21:20:43.664347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63412 ] 00:07:10.476 [2024-07-15 21:20:43.805428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.735 [2024-07-15 21:20:43.898098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.735 [2024-07-15 21:20:43.939682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.994  Copying: 512/512 [B] (average 500 kBps) 00:07:10.994 00:07:10.994 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i9a49qxg8bdmtc1p3pwv5uvmspwrgdw25e88vipmkxv7i70qloqro1oc8bly8r78u1clp9jpdt5k1vddf48z1u20yn259hlu6ipuj5uhjg4qpojgprpvg2ia36uj06ptf0isdfcua3gqq9oj5rtbnm8fsivcswdwf2oxmaac9ivbqb5khh81gsdmbyrdm9r90ol4awgzhvxs0vduw9jlmlknrhurwah0gh2qdi1j04uhkt0upxqa7pvv3sqw8bi9td0jhzfinliyll6waxl3yra8u80p3tjx9ikmz0efenz9jk2rqzg39f4nu9m2tj6ju5nm7wnft5rihgdzfy9hp0ypa6wrnm7oov38mqnqyn42ulwjcn0t2gl2cg3h6w4302mu13m4youccxqayqeg55re3mlgof009e4b9xeqyuwup9b1tws7prkb4061w4mx9lkegungvljotdnutqqkx68uk743fplh4pq4o6uqfiskn4rbpe8eyvutiomew4gj == 
\i\9\a\4\9\q\x\g\8\b\d\m\t\c\1\p\3\p\w\v\5\u\v\m\s\p\w\r\g\d\w\2\5\e\8\8\v\i\p\m\k\x\v\7\i\7\0\q\l\o\q\r\o\1\o\c\8\b\l\y\8\r\7\8\u\1\c\l\p\9\j\p\d\t\5\k\1\v\d\d\f\4\8\z\1\u\2\0\y\n\2\5\9\h\l\u\6\i\p\u\j\5\u\h\j\g\4\q\p\o\j\g\p\r\p\v\g\2\i\a\3\6\u\j\0\6\p\t\f\0\i\s\d\f\c\u\a\3\g\q\q\9\o\j\5\r\t\b\n\m\8\f\s\i\v\c\s\w\d\w\f\2\o\x\m\a\a\c\9\i\v\b\q\b\5\k\h\h\8\1\g\s\d\m\b\y\r\d\m\9\r\9\0\o\l\4\a\w\g\z\h\v\x\s\0\v\d\u\w\9\j\l\m\l\k\n\r\h\u\r\w\a\h\0\g\h\2\q\d\i\1\j\0\4\u\h\k\t\0\u\p\x\q\a\7\p\v\v\3\s\q\w\8\b\i\9\t\d\0\j\h\z\f\i\n\l\i\y\l\l\6\w\a\x\l\3\y\r\a\8\u\8\0\p\3\t\j\x\9\i\k\m\z\0\e\f\e\n\z\9\j\k\2\r\q\z\g\3\9\f\4\n\u\9\m\2\t\j\6\j\u\5\n\m\7\w\n\f\t\5\r\i\h\g\d\z\f\y\9\h\p\0\y\p\a\6\w\r\n\m\7\o\o\v\3\8\m\q\n\q\y\n\4\2\u\l\w\j\c\n\0\t\2\g\l\2\c\g\3\h\6\w\4\3\0\2\m\u\1\3\m\4\y\o\u\c\c\x\q\a\y\q\e\g\5\5\r\e\3\m\l\g\o\f\0\0\9\e\4\b\9\x\e\q\y\u\w\u\p\9\b\1\t\w\s\7\p\r\k\b\4\0\6\1\w\4\m\x\9\l\k\e\g\u\n\g\v\l\j\o\t\d\n\u\t\q\q\k\x\6\8\u\k\7\4\3\f\p\l\h\4\p\q\4\o\6\u\q\f\i\s\k\n\4\r\b\p\e\8\e\y\v\u\t\i\o\m\e\w\4\g\j ]] 00:07:10.994 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.995 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:10.995 [2024-07-15 21:20:44.213911] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:10.995 [2024-07-15 21:20:44.213980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:07:10.995 [2024-07-15 21:20:44.354960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.254 [2024-07-15 21:20:44.452368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.254 [2024-07-15 21:20:44.494777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.772  Copying: 512/512 [B] (average 2295 Bps) 00:07:11.772 00:07:11.772 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i9a49qxg8bdmtc1p3pwv5uvmspwrgdw25e88vipmkxv7i70qloqro1oc8bly8r78u1clp9jpdt5k1vddf48z1u20yn259hlu6ipuj5uhjg4qpojgprpvg2ia36uj06ptf0isdfcua3gqq9oj5rtbnm8fsivcswdwf2oxmaac9ivbqb5khh81gsdmbyrdm9r90ol4awgzhvxs0vduw9jlmlknrhurwah0gh2qdi1j04uhkt0upxqa7pvv3sqw8bi9td0jhzfinliyll6waxl3yra8u80p3tjx9ikmz0efenz9jk2rqzg39f4nu9m2tj6ju5nm7wnft5rihgdzfy9hp0ypa6wrnm7oov38mqnqyn42ulwjcn0t2gl2cg3h6w4302mu13m4youccxqayqeg55re3mlgof009e4b9xeqyuwup9b1tws7prkb4061w4mx9lkegungvljotdnutqqkx68uk743fplh4pq4o6uqfiskn4rbpe8eyvutiomew4gj == 
\i\9\a\4\9\q\x\g\8\b\d\m\t\c\1\p\3\p\w\v\5\u\v\m\s\p\w\r\g\d\w\2\5\e\8\8\v\i\p\m\k\x\v\7\i\7\0\q\l\o\q\r\o\1\o\c\8\b\l\y\8\r\7\8\u\1\c\l\p\9\j\p\d\t\5\k\1\v\d\d\f\4\8\z\1\u\2\0\y\n\2\5\9\h\l\u\6\i\p\u\j\5\u\h\j\g\4\q\p\o\j\g\p\r\p\v\g\2\i\a\3\6\u\j\0\6\p\t\f\0\i\s\d\f\c\u\a\3\g\q\q\9\o\j\5\r\t\b\n\m\8\f\s\i\v\c\s\w\d\w\f\2\o\x\m\a\a\c\9\i\v\b\q\b\5\k\h\h\8\1\g\s\d\m\b\y\r\d\m\9\r\9\0\o\l\4\a\w\g\z\h\v\x\s\0\v\d\u\w\9\j\l\m\l\k\n\r\h\u\r\w\a\h\0\g\h\2\q\d\i\1\j\0\4\u\h\k\t\0\u\p\x\q\a\7\p\v\v\3\s\q\w\8\b\i\9\t\d\0\j\h\z\f\i\n\l\i\y\l\l\6\w\a\x\l\3\y\r\a\8\u\8\0\p\3\t\j\x\9\i\k\m\z\0\e\f\e\n\z\9\j\k\2\r\q\z\g\3\9\f\4\n\u\9\m\2\t\j\6\j\u\5\n\m\7\w\n\f\t\5\r\i\h\g\d\z\f\y\9\h\p\0\y\p\a\6\w\r\n\m\7\o\o\v\3\8\m\q\n\q\y\n\4\2\u\l\w\j\c\n\0\t\2\g\l\2\c\g\3\h\6\w\4\3\0\2\m\u\1\3\m\4\y\o\u\c\c\x\q\a\y\q\e\g\5\5\r\e\3\m\l\g\o\f\0\0\9\e\4\b\9\x\e\q\y\u\w\u\p\9\b\1\t\w\s\7\p\r\k\b\4\0\6\1\w\4\m\x\9\l\k\e\g\u\n\g\v\l\j\o\t\d\n\u\t\q\q\k\x\6\8\u\k\7\4\3\f\p\l\h\4\p\q\4\o\6\u\q\f\i\s\k\n\4\r\b\p\e\8\e\y\v\u\t\i\o\m\e\w\4\g\j ]] 00:07:11.772 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.772 21:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:11.772 [2024-07-15 21:20:44.999012] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:11.772 [2024-07-15 21:20:44.999081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63428 ] 00:07:11.772 [2024-07-15 21:20:45.139920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.030 [2024-07-15 21:20:45.237632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.030 [2024-07-15 21:20:45.278527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.287  Copying: 512/512 [B] (average 125 kBps) 00:07:12.287 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i9a49qxg8bdmtc1p3pwv5uvmspwrgdw25e88vipmkxv7i70qloqro1oc8bly8r78u1clp9jpdt5k1vddf48z1u20yn259hlu6ipuj5uhjg4qpojgprpvg2ia36uj06ptf0isdfcua3gqq9oj5rtbnm8fsivcswdwf2oxmaac9ivbqb5khh81gsdmbyrdm9r90ol4awgzhvxs0vduw9jlmlknrhurwah0gh2qdi1j04uhkt0upxqa7pvv3sqw8bi9td0jhzfinliyll6waxl3yra8u80p3tjx9ikmz0efenz9jk2rqzg39f4nu9m2tj6ju5nm7wnft5rihgdzfy9hp0ypa6wrnm7oov38mqnqyn42ulwjcn0t2gl2cg3h6w4302mu13m4youccxqayqeg55re3mlgof009e4b9xeqyuwup9b1tws7prkb4061w4mx9lkegungvljotdnutqqkx68uk743fplh4pq4o6uqfiskn4rbpe8eyvutiomew4gj == 
\i\9\a\4\9\q\x\g\8\b\d\m\t\c\1\p\3\p\w\v\5\u\v\m\s\p\w\r\g\d\w\2\5\e\8\8\v\i\p\m\k\x\v\7\i\7\0\q\l\o\q\r\o\1\o\c\8\b\l\y\8\r\7\8\u\1\c\l\p\9\j\p\d\t\5\k\1\v\d\d\f\4\8\z\1\u\2\0\y\n\2\5\9\h\l\u\6\i\p\u\j\5\u\h\j\g\4\q\p\o\j\g\p\r\p\v\g\2\i\a\3\6\u\j\0\6\p\t\f\0\i\s\d\f\c\u\a\3\g\q\q\9\o\j\5\r\t\b\n\m\8\f\s\i\v\c\s\w\d\w\f\2\o\x\m\a\a\c\9\i\v\b\q\b\5\k\h\h\8\1\g\s\d\m\b\y\r\d\m\9\r\9\0\o\l\4\a\w\g\z\h\v\x\s\0\v\d\u\w\9\j\l\m\l\k\n\r\h\u\r\w\a\h\0\g\h\2\q\d\i\1\j\0\4\u\h\k\t\0\u\p\x\q\a\7\p\v\v\3\s\q\w\8\b\i\9\t\d\0\j\h\z\f\i\n\l\i\y\l\l\6\w\a\x\l\3\y\r\a\8\u\8\0\p\3\t\j\x\9\i\k\m\z\0\e\f\e\n\z\9\j\k\2\r\q\z\g\3\9\f\4\n\u\9\m\2\t\j\6\j\u\5\n\m\7\w\n\f\t\5\r\i\h\g\d\z\f\y\9\h\p\0\y\p\a\6\w\r\n\m\7\o\o\v\3\8\m\q\n\q\y\n\4\2\u\l\w\j\c\n\0\t\2\g\l\2\c\g\3\h\6\w\4\3\0\2\m\u\1\3\m\4\y\o\u\c\c\x\q\a\y\q\e\g\5\5\r\e\3\m\l\g\o\f\0\0\9\e\4\b\9\x\e\q\y\u\w\u\p\9\b\1\t\w\s\7\p\r\k\b\4\0\6\1\w\4\m\x\9\l\k\e\g\u\n\g\v\l\j\o\t\d\n\u\t\q\q\k\x\6\8\u\k\7\4\3\f\p\l\h\4\p\q\4\o\6\u\q\f\i\s\k\n\4\r\b\p\e\8\e\y\v\u\t\i\o\m\e\w\4\g\j ]] 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.288 21:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.288 [2024-07-15 21:20:45.570038] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:12.288 [2024-07-15 21:20:45.570113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63440 ] 00:07:12.545 [2024-07-15 21:20:45.711548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.545 [2024-07-15 21:20:45.808390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.545 [2024-07-15 21:20:45.849293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.803  Copying: 512/512 [B] (average 500 kBps) 00:07:12.803 00:07:12.803 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ljrcwsqwxog84vxhwhmuk2b98mrnoq2azocvfiolrd27v8flm3uifyj49nyy6us0o56b88ytzn0wf813i7drtfgw48fx13fa8b2a1gcnlms4b46yla1a2dw5w7kruh8zuj7qy4vdnjeb4zki6l4h6lvxhbk0estbawhsltgj4nqhpjeo1oi87obgmvjhz49iioe6704gjzcnofoahngjt5re7kgtwvxi82t7qxr021jg8jcmxi03aarp7pnto4gxpqaa9fhyb35vldr7xxpwno18c91wx4vuf0ripn78o61quoziv8lq2j4mxc6tel0ddnwk8o3m8e97z9fx79c0zdbvay5qmumozfsdl4rj9c383avv5dh6aocciece2s2bypf7pil0x7otdkduyjyxxzg97tqcgg67q8qs6fsc3bll5wu8mhmtvrb4iujwqapq1gp0rykcqhqbyfpwa6hjsdxxzy0tpzrm2xbpi7vuakeb844sgsg2w5qqgirjwnt7 == \l\j\r\c\w\s\q\w\x\o\g\8\4\v\x\h\w\h\m\u\k\2\b\9\8\m\r\n\o\q\2\a\z\o\c\v\f\i\o\l\r\d\2\7\v\8\f\l\m\3\u\i\f\y\j\4\9\n\y\y\6\u\s\0\o\5\6\b\8\8\y\t\z\n\0\w\f\8\1\3\i\7\d\r\t\f\g\w\4\8\f\x\1\3\f\a\8\b\2\a\1\g\c\n\l\m\s\4\b\4\6\y\l\a\1\a\2\d\w\5\w\7\k\r\u\h\8\z\u\j\7\q\y\4\v\d\n\j\e\b\4\z\k\i\6\l\4\h\6\l\v\x\h\b\k\0\e\s\t\b\a\w\h\s\l\t\g\j\4\n\q\h\p\j\e\o\1\o\i\8\7\o\b\g\m\v\j\h\z\4\9\i\i\o\e\6\7\0\4\g\j\z\c\n\o\f\o\a\h\n\g\j\t\5\r\e\7\k\g\t\w\v\x\i\8\2\t\7\q\x\r\0\2\1\j\g\8\j\c\m\x\i\0\3\a\a\r\p\7\p\n\t\o\4\g\x\p\q\a\a\9\f\h\y\b\3\5\v\l\d\r\7\x\x\p\w\n\o\1\8\c\9\1\w\x\4\v\u\f\0\r\i\p\n\7\8\o\6\1\q\u\o\z\i\v\8\l\q\2\j\4\m\x\c\6\t\e\l\0\d\d\n\w\k\8\o\3\m\8\e\9\7\z\9\f\x\7\9\c\0\z\d\b\v\a\y\5\q\m\u\m\o\z\f\s\d\l\4\r\j\9\c\3\8\3\a\v\v\5\d\h\6\a\o\c\c\i\e\c\e\2\s\2\b\y\p\f\7\p\i\l\0\x\7\o\t\d\k\d\u\y\j\y\x\x\z\g\9\7\t\q\c\g\g\6\7\q\8\q\s\6\f\s\c\3\b\l\l\5\w\u\8\m\h\m\t\v\r\b\4\i\u\j\w\q\a\p\q\1\g\p\0\r\y\k\c\q\h\q\b\y\f\p\w\a\6\h\j\s\d\x\x\z\y\0\t\p\z\r\m\2\x\b\p\i\7\v\u\a\k\e\b\8\4\4\s\g\s\g\2\w\5\q\q\g\i\r\j\w\n\t\7 ]] 00:07:12.803 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.803 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.803 [2024-07-15 21:20:46.119190] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:12.803 [2024-07-15 21:20:46.119261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63446 ] 00:07:13.061 [2024-07-15 21:20:46.259666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.061 [2024-07-15 21:20:46.355024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.061 [2024-07-15 21:20:46.395920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.319  Copying: 512/512 [B] (average 500 kBps) 00:07:13.319 00:07:13.319 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ljrcwsqwxog84vxhwhmuk2b98mrnoq2azocvfiolrd27v8flm3uifyj49nyy6us0o56b88ytzn0wf813i7drtfgw48fx13fa8b2a1gcnlms4b46yla1a2dw5w7kruh8zuj7qy4vdnjeb4zki6l4h6lvxhbk0estbawhsltgj4nqhpjeo1oi87obgmvjhz49iioe6704gjzcnofoahngjt5re7kgtwvxi82t7qxr021jg8jcmxi03aarp7pnto4gxpqaa9fhyb35vldr7xxpwno18c91wx4vuf0ripn78o61quoziv8lq2j4mxc6tel0ddnwk8o3m8e97z9fx79c0zdbvay5qmumozfsdl4rj9c383avv5dh6aocciece2s2bypf7pil0x7otdkduyjyxxzg97tqcgg67q8qs6fsc3bll5wu8mhmtvrb4iujwqapq1gp0rykcqhqbyfpwa6hjsdxxzy0tpzrm2xbpi7vuakeb844sgsg2w5qqgirjwnt7 == \l\j\r\c\w\s\q\w\x\o\g\8\4\v\x\h\w\h\m\u\k\2\b\9\8\m\r\n\o\q\2\a\z\o\c\v\f\i\o\l\r\d\2\7\v\8\f\l\m\3\u\i\f\y\j\4\9\n\y\y\6\u\s\0\o\5\6\b\8\8\y\t\z\n\0\w\f\8\1\3\i\7\d\r\t\f\g\w\4\8\f\x\1\3\f\a\8\b\2\a\1\g\c\n\l\m\s\4\b\4\6\y\l\a\1\a\2\d\w\5\w\7\k\r\u\h\8\z\u\j\7\q\y\4\v\d\n\j\e\b\4\z\k\i\6\l\4\h\6\l\v\x\h\b\k\0\e\s\t\b\a\w\h\s\l\t\g\j\4\n\q\h\p\j\e\o\1\o\i\8\7\o\b\g\m\v\j\h\z\4\9\i\i\o\e\6\7\0\4\g\j\z\c\n\o\f\o\a\h\n\g\j\t\5\r\e\7\k\g\t\w\v\x\i\8\2\t\7\q\x\r\0\2\1\j\g\8\j\c\m\x\i\0\3\a\a\r\p\7\p\n\t\o\4\g\x\p\q\a\a\9\f\h\y\b\3\5\v\l\d\r\7\x\x\p\w\n\o\1\8\c\9\1\w\x\4\v\u\f\0\r\i\p\n\7\8\o\6\1\q\u\o\z\i\v\8\l\q\2\j\4\m\x\c\6\t\e\l\0\d\d\n\w\k\8\o\3\m\8\e\9\7\z\9\f\x\7\9\c\0\z\d\b\v\a\y\5\q\m\u\m\o\z\f\s\d\l\4\r\j\9\c\3\8\3\a\v\v\5\d\h\6\a\o\c\c\i\e\c\e\2\s\2\b\y\p\f\7\p\i\l\0\x\7\o\t\d\k\d\u\y\j\y\x\x\z\g\9\7\t\q\c\g\g\6\7\q\8\q\s\6\f\s\c\3\b\l\l\5\w\u\8\m\h\m\t\v\r\b\4\i\u\j\w\q\a\p\q\1\g\p\0\r\y\k\c\q\h\q\b\y\f\p\w\a\6\h\j\s\d\x\x\z\y\0\t\p\z\r\m\2\x\b\p\i\7\v\u\a\k\e\b\8\4\4\s\g\s\g\2\w\5\q\q\g\i\r\j\w\n\t\7 ]] 00:07:13.319 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.319 21:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.319 [2024-07-15 21:20:46.683792] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:13.319 [2024-07-15 21:20:46.683881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63455 ] 00:07:13.576 [2024-07-15 21:20:46.822060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.576 [2024-07-15 21:20:46.918993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.834 [2024-07-15 21:20:46.960048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.834  Copying: 512/512 [B] (average 500 kBps) 00:07:13.834 00:07:13.834 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ljrcwsqwxog84vxhwhmuk2b98mrnoq2azocvfiolrd27v8flm3uifyj49nyy6us0o56b88ytzn0wf813i7drtfgw48fx13fa8b2a1gcnlms4b46yla1a2dw5w7kruh8zuj7qy4vdnjeb4zki6l4h6lvxhbk0estbawhsltgj4nqhpjeo1oi87obgmvjhz49iioe6704gjzcnofoahngjt5re7kgtwvxi82t7qxr021jg8jcmxi03aarp7pnto4gxpqaa9fhyb35vldr7xxpwno18c91wx4vuf0ripn78o61quoziv8lq2j4mxc6tel0ddnwk8o3m8e97z9fx79c0zdbvay5qmumozfsdl4rj9c383avv5dh6aocciece2s2bypf7pil0x7otdkduyjyxxzg97tqcgg67q8qs6fsc3bll5wu8mhmtvrb4iujwqapq1gp0rykcqhqbyfpwa6hjsdxxzy0tpzrm2xbpi7vuakeb844sgsg2w5qqgirjwnt7 == \l\j\r\c\w\s\q\w\x\o\g\8\4\v\x\h\w\h\m\u\k\2\b\9\8\m\r\n\o\q\2\a\z\o\c\v\f\i\o\l\r\d\2\7\v\8\f\l\m\3\u\i\f\y\j\4\9\n\y\y\6\u\s\0\o\5\6\b\8\8\y\t\z\n\0\w\f\8\1\3\i\7\d\r\t\f\g\w\4\8\f\x\1\3\f\a\8\b\2\a\1\g\c\n\l\m\s\4\b\4\6\y\l\a\1\a\2\d\w\5\w\7\k\r\u\h\8\z\u\j\7\q\y\4\v\d\n\j\e\b\4\z\k\i\6\l\4\h\6\l\v\x\h\b\k\0\e\s\t\b\a\w\h\s\l\t\g\j\4\n\q\h\p\j\e\o\1\o\i\8\7\o\b\g\m\v\j\h\z\4\9\i\i\o\e\6\7\0\4\g\j\z\c\n\o\f\o\a\h\n\g\j\t\5\r\e\7\k\g\t\w\v\x\i\8\2\t\7\q\x\r\0\2\1\j\g\8\j\c\m\x\i\0\3\a\a\r\p\7\p\n\t\o\4\g\x\p\q\a\a\9\f\h\y\b\3\5\v\l\d\r\7\x\x\p\w\n\o\1\8\c\9\1\w\x\4\v\u\f\0\r\i\p\n\7\8\o\6\1\q\u\o\z\i\v\8\l\q\2\j\4\m\x\c\6\t\e\l\0\d\d\n\w\k\8\o\3\m\8\e\9\7\z\9\f\x\7\9\c\0\z\d\b\v\a\y\5\q\m\u\m\o\z\f\s\d\l\4\r\j\9\c\3\8\3\a\v\v\5\d\h\6\a\o\c\c\i\e\c\e\2\s\2\b\y\p\f\7\p\i\l\0\x\7\o\t\d\k\d\u\y\j\y\x\x\z\g\9\7\t\q\c\g\g\6\7\q\8\q\s\6\f\s\c\3\b\l\l\5\w\u\8\m\h\m\t\v\r\b\4\i\u\j\w\q\a\p\q\1\g\p\0\r\y\k\c\q\h\q\b\y\f\p\w\a\6\h\j\s\d\x\x\z\y\0\t\p\z\r\m\2\x\b\p\i\7\v\u\a\k\e\b\8\4\4\s\g\s\g\2\w\5\q\q\g\i\r\j\w\n\t\7 ]] 00:07:13.834 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.834 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.091 [2024-07-15 21:20:47.239882] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:14.091 [2024-07-15 21:20:47.239951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63462 ] 00:07:14.091 [2024-07-15 21:20:47.381794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.348 [2024-07-15 21:20:47.477122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.348 [2024-07-15 21:20:47.517586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.605  Copying: 512/512 [B] (average 250 kBps) 00:07:14.605 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ljrcwsqwxog84vxhwhmuk2b98mrnoq2azocvfiolrd27v8flm3uifyj49nyy6us0o56b88ytzn0wf813i7drtfgw48fx13fa8b2a1gcnlms4b46yla1a2dw5w7kruh8zuj7qy4vdnjeb4zki6l4h6lvxhbk0estbawhsltgj4nqhpjeo1oi87obgmvjhz49iioe6704gjzcnofoahngjt5re7kgtwvxi82t7qxr021jg8jcmxi03aarp7pnto4gxpqaa9fhyb35vldr7xxpwno18c91wx4vuf0ripn78o61quoziv8lq2j4mxc6tel0ddnwk8o3m8e97z9fx79c0zdbvay5qmumozfsdl4rj9c383avv5dh6aocciece2s2bypf7pil0x7otdkduyjyxxzg97tqcgg67q8qs6fsc3bll5wu8mhmtvrb4iujwqapq1gp0rykcqhqbyfpwa6hjsdxxzy0tpzrm2xbpi7vuakeb844sgsg2w5qqgirjwnt7 == \l\j\r\c\w\s\q\w\x\o\g\8\4\v\x\h\w\h\m\u\k\2\b\9\8\m\r\n\o\q\2\a\z\o\c\v\f\i\o\l\r\d\2\7\v\8\f\l\m\3\u\i\f\y\j\4\9\n\y\y\6\u\s\0\o\5\6\b\8\8\y\t\z\n\0\w\f\8\1\3\i\7\d\r\t\f\g\w\4\8\f\x\1\3\f\a\8\b\2\a\1\g\c\n\l\m\s\4\b\4\6\y\l\a\1\a\2\d\w\5\w\7\k\r\u\h\8\z\u\j\7\q\y\4\v\d\n\j\e\b\4\z\k\i\6\l\4\h\6\l\v\x\h\b\k\0\e\s\t\b\a\w\h\s\l\t\g\j\4\n\q\h\p\j\e\o\1\o\i\8\7\o\b\g\m\v\j\h\z\4\9\i\i\o\e\6\7\0\4\g\j\z\c\n\o\f\o\a\h\n\g\j\t\5\r\e\7\k\g\t\w\v\x\i\8\2\t\7\q\x\r\0\2\1\j\g\8\j\c\m\x\i\0\3\a\a\r\p\7\p\n\t\o\4\g\x\p\q\a\a\9\f\h\y\b\3\5\v\l\d\r\7\x\x\p\w\n\o\1\8\c\9\1\w\x\4\v\u\f\0\r\i\p\n\7\8\o\6\1\q\u\o\z\i\v\8\l\q\2\j\4\m\x\c\6\t\e\l\0\d\d\n\w\k\8\o\3\m\8\e\9\7\z\9\f\x\7\9\c\0\z\d\b\v\a\y\5\q\m\u\m\o\z\f\s\d\l\4\r\j\9\c\3\8\3\a\v\v\5\d\h\6\a\o\c\c\i\e\c\e\2\s\2\b\y\p\f\7\p\i\l\0\x\7\o\t\d\k\d\u\y\j\y\x\x\z\g\9\7\t\q\c\g\g\6\7\q\8\q\s\6\f\s\c\3\b\l\l\5\w\u\8\m\h\m\t\v\r\b\4\i\u\j\w\q\a\p\q\1\g\p\0\r\y\k\c\q\h\q\b\y\f\p\w\a\6\h\j\s\d\x\x\z\y\0\t\p\z\r\m\2\x\b\p\i\7\v\u\a\k\e\b\8\4\4\s\g\s\g\2\w\5\q\q\g\i\r\j\w\n\t\7 ]] 00:07:14.605 00:07:14.605 real 0m4.689s 00:07:14.605 user 0m2.463s 00:07:14.605 sys 0m1.017s 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.605 ************************************ 00:07:14.605 END TEST dd_flags_misc_forced_aio 00:07:14.605 ************************************ 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:14.605 ************************************ 00:07:14.605 END TEST spdk_dd_posix 00:07:14.605 ************************************ 00:07:14.605 00:07:14.605 real 0m20.986s 00:07:14.605 user 0m10.310s 
00:07:14.605 sys 0m6.131s 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.605 21:20:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.605 21:20:47 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:14.605 21:20:47 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.605 21:20:47 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.605 21:20:47 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.605 21:20:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.605 ************************************ 00:07:14.605 START TEST spdk_dd_malloc 00:07:14.605 ************************************ 00:07:14.605 21:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.862 * Looking for test storage... 00:07:14.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.862 21:20:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.862 21:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.862 21:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.862 21:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.863 21:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:14.863 ************************************ 00:07:14.863 START TEST dd_malloc_copy 00:07:14.863 ************************************ 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:14.863 21:20:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.863 { 00:07:14.863 "subsystems": [ 00:07:14.863 { 00:07:14.863 "subsystem": "bdev", 00:07:14.863 "config": [ 00:07:14.863 { 00:07:14.863 "params": { 00:07:14.863 "block_size": 512, 00:07:14.863 "num_blocks": 1048576, 00:07:14.863 "name": "malloc0" 00:07:14.863 }, 00:07:14.863 "method": "bdev_malloc_create" 00:07:14.863 }, 00:07:14.863 { 00:07:14.863 "params": { 00:07:14.863 "block_size": 512, 00:07:14.863 "num_blocks": 1048576, 00:07:14.863 "name": "malloc1" 00:07:14.863 }, 00:07:14.863 "method": "bdev_malloc_create" 00:07:14.863 }, 00:07:14.863 { 00:07:14.863 "method": "bdev_wait_for_examine" 00:07:14.863 } 00:07:14.863 ] 00:07:14.863 } 00:07:14.863 ] 00:07:14.863 } 00:07:14.863 [2024-07-15 21:20:48.062751] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:14.863 [2024-07-15 21:20:48.062813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:07:14.863 [2024-07-15 21:20:48.203967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.120 [2024-07-15 21:20:48.299406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.120 [2024-07-15 21:20:48.340663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.998  Copying: 256/512 [MB] (256 MBps) Copying: 510/512 [MB] (253 MBps) Copying: 512/512 [MB] (average 255 MBps) 00:07:17.998 00:07:17.998 21:20:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:17.998 21:20:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:17.998 21:20:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:17.998 21:20:51 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.998 [2024-07-15 21:20:51.139051] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:17.998 [2024-07-15 21:20:51.139126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63573 ] 00:07:17.998 { 00:07:17.998 "subsystems": [ 00:07:17.998 { 00:07:17.998 "subsystem": "bdev", 00:07:17.998 "config": [ 00:07:17.998 { 00:07:17.998 "params": { 00:07:17.998 "block_size": 512, 00:07:17.998 "num_blocks": 1048576, 00:07:17.998 "name": "malloc0" 00:07:17.998 }, 00:07:17.998 "method": "bdev_malloc_create" 00:07:17.998 }, 00:07:17.998 { 00:07:17.998 "params": { 00:07:17.998 "block_size": 512, 00:07:17.998 "num_blocks": 1048576, 00:07:17.998 "name": "malloc1" 00:07:17.998 }, 00:07:17.998 "method": "bdev_malloc_create" 00:07:17.998 }, 00:07:17.998 { 00:07:17.998 "method": "bdev_wait_for_examine" 00:07:17.998 } 00:07:17.998 ] 00:07:17.998 } 00:07:17.998 ] 00:07:17.998 } 00:07:17.998 [2024-07-15 21:20:51.278926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.257 [2024-07-15 21:20:51.377139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.257 [2024-07-15 21:20:51.419104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.864  Copying: 258/512 [MB] (258 MBps) Copying: 512/512 [MB] (average 258 MBps) 00:07:20.864 00:07:20.864 00:07:20.864 real 0m6.133s 00:07:20.864 user 0m5.283s 00:07:20.864 sys 0m0.685s 00:07:20.864 ************************************ 00:07:20.864 END TEST dd_malloc_copy 00:07:20.864 ************************************ 00:07:20.864 21:20:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.864 21:20:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.864 21:20:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:20.864 ************************************ 00:07:20.864 END TEST spdk_dd_malloc 00:07:20.864 ************************************ 00:07:20.864 00:07:20.864 real 0m6.334s 00:07:20.864 user 0m5.360s 
00:07:20.864 sys 0m0.813s 00:07:20.864 21:20:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.864 21:20:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:21.124 21:20:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:21.124 21:20:54 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:21.124 21:20:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:21.124 21:20:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.124 21:20:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:21.124 ************************************ 00:07:21.124 START TEST spdk_dd_bdev_to_bdev 00:07:21.124 ************************************ 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:21.124 * Looking for test storage... 00:07:21.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.124 ************************************ 00:07:21.124 START TEST dd_inflate_file 00:07:21.124 ************************************ 00:07:21.124 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:21.124 [2024-07-15 21:20:54.462362] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:21.125 [2024-07-15 21:20:54.462425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63672 ] 00:07:21.384 [2024-07-15 21:20:54.599882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.384 [2024-07-15 21:20:54.692159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.384 [2024-07-15 21:20:54.733574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.643  Copying: 64/64 [MB] (average 1488 MBps) 00:07:21.643 00:07:21.643 00:07:21.643 real 0m0.558s 00:07:21.643 user 0m0.332s 00:07:21.643 sys 0m0.273s 00:07:21.643 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.643 ************************************ 00:07:21.643 END TEST dd_inflate_file 00:07:21.643 ************************************ 00:07:21.643 21:20:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:21.902 ************************************ 00:07:21.902 START TEST dd_copy_to_out_bdev 00:07:21.902 ************************************ 00:07:21.902 21:20:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:21.902 { 00:07:21.902 "subsystems": [ 00:07:21.902 { 00:07:21.902 "subsystem": "bdev", 00:07:21.902 "config": [ 00:07:21.902 { 00:07:21.902 "params": { 00:07:21.902 "trtype": "pcie", 00:07:21.902 "traddr": "0000:00:10.0", 00:07:21.902 "name": "Nvme0" 00:07:21.902 }, 00:07:21.902 "method": "bdev_nvme_attach_controller" 00:07:21.902 }, 00:07:21.902 { 00:07:21.902 "params": { 00:07:21.902 "trtype": "pcie", 00:07:21.902 "traddr": "0000:00:11.0", 00:07:21.902 "name": "Nvme1" 00:07:21.902 }, 00:07:21.902 "method": "bdev_nvme_attach_controller" 00:07:21.902 }, 00:07:21.902 { 00:07:21.902 "method": "bdev_wait_for_examine" 00:07:21.902 } 00:07:21.902 ] 
00:07:21.902 } 00:07:21.902 ] 00:07:21.902 } 00:07:21.902 [2024-07-15 21:20:55.109962] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:21.902 [2024-07-15 21:20:55.110026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:07:21.902 [2024-07-15 21:20:55.250694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.161 [2024-07-15 21:20:55.345590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.161 [2024-07-15 21:20:55.386719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.538  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 56 MBps) 00:07:23.538 00:07:23.797 00:07:23.797 real 0m1.851s 00:07:23.797 user 0m1.638s 00:07:23.797 sys 0m1.459s 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.797 ************************************ 00:07:23.797 END TEST dd_copy_to_out_bdev 00:07:23.797 ************************************ 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.797 ************************************ 00:07:23.797 START TEST dd_offset_magic 00:07:23.797 ************************************ 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:23.797 21:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:23.797 [2024-07-15 21:20:57.032561] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:23.797 [2024-07-15 21:20:57.032744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:07:23.797 { 00:07:23.797 "subsystems": [ 00:07:23.797 { 00:07:23.797 "subsystem": "bdev", 00:07:23.797 "config": [ 00:07:23.797 { 00:07:23.797 "params": { 00:07:23.797 "trtype": "pcie", 00:07:23.797 "traddr": "0000:00:10.0", 00:07:23.797 "name": "Nvme0" 00:07:23.797 }, 00:07:23.797 "method": "bdev_nvme_attach_controller" 00:07:23.797 }, 00:07:23.797 { 00:07:23.797 "params": { 00:07:23.797 "trtype": "pcie", 00:07:23.797 "traddr": "0000:00:11.0", 00:07:23.797 "name": "Nvme1" 00:07:23.797 }, 00:07:23.797 "method": "bdev_nvme_attach_controller" 00:07:23.797 }, 00:07:23.797 { 00:07:23.797 "method": "bdev_wait_for_examine" 00:07:23.797 } 00:07:23.797 ] 00:07:23.797 } 00:07:23.797 ] 00:07:23.797 } 00:07:24.056 [2024-07-15 21:20:57.172758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.056 [2024-07-15 21:20:57.264928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.056 [2024-07-15 21:20:57.306320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.575  Copying: 65/65 [MB] (average 738 MBps) 00:07:24.575 00:07:24.575 21:20:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:24.575 21:20:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:24.575 21:20:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:24.575 21:20:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:24.575 [2024-07-15 21:20:57.841438] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:24.575 [2024-07-15 21:20:57.841628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63766 ] 00:07:24.575 { 00:07:24.575 "subsystems": [ 00:07:24.575 { 00:07:24.575 "subsystem": "bdev", 00:07:24.575 "config": [ 00:07:24.575 { 00:07:24.575 "params": { 00:07:24.575 "trtype": "pcie", 00:07:24.575 "traddr": "0000:00:10.0", 00:07:24.575 "name": "Nvme0" 00:07:24.575 }, 00:07:24.575 "method": "bdev_nvme_attach_controller" 00:07:24.575 }, 00:07:24.575 { 00:07:24.575 "params": { 00:07:24.575 "trtype": "pcie", 00:07:24.575 "traddr": "0000:00:11.0", 00:07:24.575 "name": "Nvme1" 00:07:24.575 }, 00:07:24.575 "method": "bdev_nvme_attach_controller" 00:07:24.575 }, 00:07:24.575 { 00:07:24.575 "method": "bdev_wait_for_examine" 00:07:24.575 } 00:07:24.575 ] 00:07:24.575 } 00:07:24.575 ] 00:07:24.575 } 00:07:24.834 [2024-07-15 21:20:57.983347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.834 [2024-07-15 21:20:58.077404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.834 [2024-07-15 21:20:58.118660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.093  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:25.093 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:25.352 21:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.352 [2024-07-15 21:20:58.518289] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:25.352 [2024-07-15 21:20:58.518356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63787 ] 00:07:25.352 { 00:07:25.352 "subsystems": [ 00:07:25.352 { 00:07:25.352 "subsystem": "bdev", 00:07:25.352 "config": [ 00:07:25.352 { 00:07:25.352 "params": { 00:07:25.352 "trtype": "pcie", 00:07:25.352 "traddr": "0000:00:10.0", 00:07:25.352 "name": "Nvme0" 00:07:25.352 }, 00:07:25.352 "method": "bdev_nvme_attach_controller" 00:07:25.352 }, 00:07:25.352 { 00:07:25.352 "params": { 00:07:25.352 "trtype": "pcie", 00:07:25.352 "traddr": "0000:00:11.0", 00:07:25.352 "name": "Nvme1" 00:07:25.352 }, 00:07:25.352 "method": "bdev_nvme_attach_controller" 00:07:25.352 }, 00:07:25.352 { 00:07:25.352 "method": "bdev_wait_for_examine" 00:07:25.352 } 00:07:25.352 ] 00:07:25.352 } 00:07:25.352 ] 00:07:25.352 } 00:07:25.352 [2024-07-15 21:20:58.659799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.611 [2024-07-15 21:20:58.749483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.611 [2024-07-15 21:20:58.790739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.129  Copying: 65/65 [MB] (average 833 MBps) 00:07:26.129 00:07:26.129 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:26.129 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:26.129 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:26.129 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:26.129 [2024-07-15 21:20:59.304698] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:26.129 [2024-07-15 21:20:59.304764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63807 ] 00:07:26.129 { 00:07:26.129 "subsystems": [ 00:07:26.129 { 00:07:26.129 "subsystem": "bdev", 00:07:26.129 "config": [ 00:07:26.129 { 00:07:26.129 "params": { 00:07:26.129 "trtype": "pcie", 00:07:26.129 "traddr": "0000:00:10.0", 00:07:26.129 "name": "Nvme0" 00:07:26.129 }, 00:07:26.129 "method": "bdev_nvme_attach_controller" 00:07:26.129 }, 00:07:26.129 { 00:07:26.129 "params": { 00:07:26.129 "trtype": "pcie", 00:07:26.129 "traddr": "0000:00:11.0", 00:07:26.129 "name": "Nvme1" 00:07:26.129 }, 00:07:26.129 "method": "bdev_nvme_attach_controller" 00:07:26.129 }, 00:07:26.129 { 00:07:26.129 "method": "bdev_wait_for_examine" 00:07:26.129 } 00:07:26.129 ] 00:07:26.129 } 00:07:26.129 ] 00:07:26.129 } 00:07:26.129 [2024-07-15 21:20:59.445231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.388 [2024-07-15 21:20:59.540605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.388 [2024-07-15 21:20:59.581702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.647  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:26.647 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.648 00:07:26.648 real 0m2.948s 00:07:26.648 user 0m2.153s 00:07:26.648 sys 0m0.824s 00:07:26.648 ************************************ 00:07:26.648 END TEST dd_offset_magic 00:07:26.648 ************************************ 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:26.648 21:20:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.906 [2024-07-15 21:21:00.040638] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:26.906 [2024-07-15 21:21:00.040712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63843 ] 00:07:26.906 { 00:07:26.906 "subsystems": [ 00:07:26.906 { 00:07:26.906 "subsystem": "bdev", 00:07:26.906 "config": [ 00:07:26.906 { 00:07:26.906 "params": { 00:07:26.906 "trtype": "pcie", 00:07:26.906 "traddr": "0000:00:10.0", 00:07:26.906 "name": "Nvme0" 00:07:26.906 }, 00:07:26.906 "method": "bdev_nvme_attach_controller" 00:07:26.906 }, 00:07:26.906 { 00:07:26.906 "params": { 00:07:26.906 "trtype": "pcie", 00:07:26.906 "traddr": "0000:00:11.0", 00:07:26.906 "name": "Nvme1" 00:07:26.906 }, 00:07:26.906 "method": "bdev_nvme_attach_controller" 00:07:26.906 }, 00:07:26.906 { 00:07:26.906 "method": "bdev_wait_for_examine" 00:07:26.906 } 00:07:26.906 ] 00:07:26.906 } 00:07:26.906 ] 00:07:26.906 } 00:07:26.906 [2024-07-15 21:21:00.180686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.165 [2024-07-15 21:21:00.277608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.165 [2024-07-15 21:21:00.319389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.424  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:27.424 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:27.424 21:21:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.424 [2024-07-15 21:21:00.717801] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:27.424 [2024-07-15 21:21:00.717873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:07:27.424 { 00:07:27.424 "subsystems": [ 00:07:27.424 { 00:07:27.424 "subsystem": "bdev", 00:07:27.424 "config": [ 00:07:27.424 { 00:07:27.424 "params": { 00:07:27.424 "trtype": "pcie", 00:07:27.424 "traddr": "0000:00:10.0", 00:07:27.424 "name": "Nvme0" 00:07:27.424 }, 00:07:27.424 "method": "bdev_nvme_attach_controller" 00:07:27.424 }, 00:07:27.424 { 00:07:27.424 "params": { 00:07:27.424 "trtype": "pcie", 00:07:27.424 "traddr": "0000:00:11.0", 00:07:27.424 "name": "Nvme1" 00:07:27.424 }, 00:07:27.424 "method": "bdev_nvme_attach_controller" 00:07:27.424 }, 00:07:27.424 { 00:07:27.424 "method": "bdev_wait_for_examine" 00:07:27.424 } 00:07:27.424 ] 00:07:27.424 } 00:07:27.424 ] 00:07:27.424 } 00:07:27.682 [2024-07-15 21:21:00.858752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.683 [2024-07-15 21:21:00.947492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.683 [2024-07-15 21:21:00.988677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.199  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:28.199 00:07:28.199 21:21:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:28.199 ************************************ 00:07:28.199 END TEST spdk_dd_bdev_to_bdev 00:07:28.199 ************************************ 00:07:28.199 00:07:28.199 real 0m7.105s 00:07:28.199 user 0m5.232s 00:07:28.199 sys 0m3.281s 00:07:28.199 21:21:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.199 21:21:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:28.200 21:21:01 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:28.200 21:21:01 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:28.200 21:21:01 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.200 21:21:01 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.200 21:21:01 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.200 21:21:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:28.200 ************************************ 00:07:28.200 START TEST spdk_dd_uring 00:07:28.200 ************************************ 00:07:28.200 21:21:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.200 * Looking for test storage... 
00:07:28.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.200 21:21:01 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:28.459 ************************************ 00:07:28.459 START TEST dd_uring_copy 00:07:28.459 ************************************ 00:07:28.459 
21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=qrz5std9b8inog7p6xpevesqixbw2hj7xauv89amjwst9vkdyqiqbgfdd4tlp2sqzwlrizxi3o9ywrpn585ncy4dz27bvjtttj6joh7oj4sy3mpwwah1du19mncpytaxwfa1opmtyyo1idejqip1mdhyqhtzlqfrjrq851kututql0qd6s8gqlih4d8rdr3tms6jz6n5q2cwd8ng7d5lb31d5tfs90dxo0nrkrefa6zxy5woz6urrzkjx3f0d5bvhjxwpzi3qt43yfvb6gqemohn1bu6vw1mmhsgccg380nla88ocasz2k4mdlrfpqit0isdrxo5n4zfb3prs0106ujtz4nsic19odxkdobad1pkzlukr8najm2ubau1pnf2r0lhj8wne1b37mlc2nplz4jofxzl0pw88olrps02t6zondpw9gkmxsbwovg9xribyus4q4cfwp3egowcsts4ktuq1q2wojenv0wfzmex699me6l12tekfjd8n4dc7679nnrv0o6q86h3792iz6g7opruhlpkiq416h2hh3qvr6ub8bqiaw9qq93pv86gglvsla292mvkk625vt56y449s062pjc62rmpgcejm6x0x9ijqicv92qrpvcixdlg59mmcpzhhql8tmoi6xvlxu8mwf5b93rhqp2aug0ore3tofp7yhcynltuepdzhwhvcudomh6vdrloepnv0a1qyqjhmvgj43x3tk7jv5nyxwq9poqbail7qooz3tc8cq7d1cfsuxj06m09cj9dkrvaz6kszvktpn92wjpy86wcp2wrz2rov4mc7eqkqqbpsulnhowa7igx48825uf4jnsft2my97r21kopzijh48xt6lnccnnr9l4tqaa5sbl2jeo5kzrg3nch2ic0iaymtr0y4v1eiog9pvunbpda5gkvjkr588hvemlpzpg9cqligsbxp28rhe7anfp6ihrv2dem5k6myjgcty6epi04m4wjk4isk6xyr2empl5bav9omyvhc5zc 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo qrz5std9b8inog7p6xpevesqixbw2hj7xauv89amjwst9vkdyqiqbgfdd4tlp2sqzwlrizxi3o9ywrpn585ncy4dz27bvjtttj6joh7oj4sy3mpwwah1du19mncpytaxwfa1opmtyyo1idejqip1mdhyqhtzlqfrjrq851kututql0qd6s8gqlih4d8rdr3tms6jz6n5q2cwd8ng7d5lb31d5tfs90dxo0nrkrefa6zxy5woz6urrzkjx3f0d5bvhjxwpzi3qt43yfvb6gqemohn1bu6vw1mmhsgccg380nla88ocasz2k4mdlrfpqit0isdrxo5n4zfb3prs0106ujtz4nsic19odxkdobad1pkzlukr8najm2ubau1pnf2r0lhj8wne1b37mlc2nplz4jofxzl0pw88olrps02t6zondpw9gkmxsbwovg9xribyus4q4cfwp3egowcsts4ktuq1q2wojenv0wfzmex699me6l12tekfjd8n4dc7679nnrv0o6q86h3792iz6g7opruhlpkiq416h2hh3qvr6ub8bqiaw9qq93pv86gglvsla292mvkk625vt56y449s062pjc62rmpgcejm6x0x9ijqicv92qrpvcixdlg59mmcpzhhql8tmoi6xvlxu8mwf5b93rhqp2aug0ore3tofp7yhcynltuepdzhwhvcudomh6vdrloepnv0a1qyqjhmvgj43x3tk7jv5nyxwq9poqbail7qooz3tc8cq7d1cfsuxj06m09cj9dkrvaz6kszvktpn92wjpy86wcp2wrz2rov4mc7eqkqqbpsulnhowa7igx48825uf4jnsft2my97r21kopzijh48xt6lnccnnr9l4tqaa5sbl2jeo5kzrg3nch2ic0iaymtr0y4v1eiog9pvunbpda5gkvjkr588hvemlpzpg9cqligsbxp28rhe7anfp6ihrv2dem5k6myjgcty6epi04m4wjk4isk6xyr2empl5bav9omyvhc5zc 00:07:28.459 21:21:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:28.459 [2024-07-15 21:21:01.669201] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:28.459 [2024-07-15 21:21:01.669357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63924 ] 00:07:28.459 [2024-07-15 21:21:01.812166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.718 [2024-07-15 21:21:01.897827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.718 [2024-07-15 21:21:01.938508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.572  Copying: 511/511 [MB] (average 1861 MBps) 00:07:29.572 00:07:29.572 21:21:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:29.572 21:21:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:29.572 21:21:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:29.572 21:21:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.572 [2024-07-15 21:21:02.755193] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:29.572 [2024-07-15 21:21:02.755259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:07:29.572 { 00:07:29.572 "subsystems": [ 00:07:29.572 { 00:07:29.572 "subsystem": "bdev", 00:07:29.572 "config": [ 00:07:29.572 { 00:07:29.572 "params": { 00:07:29.572 "block_size": 512, 00:07:29.572 "num_blocks": 1048576, 00:07:29.572 "name": "malloc0" 00:07:29.572 }, 00:07:29.572 "method": "bdev_malloc_create" 00:07:29.572 }, 00:07:29.572 { 00:07:29.572 "params": { 00:07:29.572 "filename": "/dev/zram1", 00:07:29.572 "name": "uring0" 00:07:29.572 }, 00:07:29.572 "method": "bdev_uring_create" 00:07:29.572 }, 00:07:29.572 { 00:07:29.572 "method": "bdev_wait_for_examine" 00:07:29.572 } 00:07:29.572 ] 00:07:29.572 } 00:07:29.572 ] 00:07:29.572 } 00:07:29.572 [2024-07-15 21:21:02.896847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.830 [2024-07-15 21:21:02.986661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.830 [2024-07-15 21:21:03.028129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.336  Copying: 264/512 [MB] (264 MBps) Copying: 512/512 [MB] (average 265 MBps) 00:07:32.336 00:07:32.336 21:21:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:32.336 21:21:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:32.336 21:21:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.336 21:21:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.336 { 00:07:32.336 "subsystems": [ 00:07:32.336 { 00:07:32.336 "subsystem": "bdev", 00:07:32.336 "config": [ 00:07:32.336 { 00:07:32.336 "params": { 00:07:32.336 "block_size": 512, 00:07:32.336 "num_blocks": 1048576, 00:07:32.336 "name": "malloc0" 00:07:32.336 }, 00:07:32.336 "method": 
"bdev_malloc_create" 00:07:32.336 }, 00:07:32.336 { 00:07:32.336 "params": { 00:07:32.336 "filename": "/dev/zram1", 00:07:32.336 "name": "uring0" 00:07:32.336 }, 00:07:32.336 "method": "bdev_uring_create" 00:07:32.336 }, 00:07:32.336 { 00:07:32.336 "method": "bdev_wait_for_examine" 00:07:32.336 } 00:07:32.336 ] 00:07:32.336 } 00:07:32.336 ] 00:07:32.336 } 00:07:32.336 [2024-07-15 21:21:05.493570] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:32.336 [2024-07-15 21:21:05.493655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63985 ] 00:07:32.336 [2024-07-15 21:21:05.636997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.594 [2024-07-15 21:21:05.723057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.594 [2024-07-15 21:21:05.764242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.455  Copying: 226/512 [MB] (226 MBps) Copying: 428/512 [MB] (202 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:07:35.455 00:07:35.455 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:35.456 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ qrz5std9b8inog7p6xpevesqixbw2hj7xauv89amjwst9vkdyqiqbgfdd4tlp2sqzwlrizxi3o9ywrpn585ncy4dz27bvjtttj6joh7oj4sy3mpwwah1du19mncpytaxwfa1opmtyyo1idejqip1mdhyqhtzlqfrjrq851kututql0qd6s8gqlih4d8rdr3tms6jz6n5q2cwd8ng7d5lb31d5tfs90dxo0nrkrefa6zxy5woz6urrzkjx3f0d5bvhjxwpzi3qt43yfvb6gqemohn1bu6vw1mmhsgccg380nla88ocasz2k4mdlrfpqit0isdrxo5n4zfb3prs0106ujtz4nsic19odxkdobad1pkzlukr8najm2ubau1pnf2r0lhj8wne1b37mlc2nplz4jofxzl0pw88olrps02t6zondpw9gkmxsbwovg9xribyus4q4cfwp3egowcsts4ktuq1q2wojenv0wfzmex699me6l12tekfjd8n4dc7679nnrv0o6q86h3792iz6g7opruhlpkiq416h2hh3qvr6ub8bqiaw9qq93pv86gglvsla292mvkk625vt56y449s062pjc62rmpgcejm6x0x9ijqicv92qrpvcixdlg59mmcpzhhql8tmoi6xvlxu8mwf5b93rhqp2aug0ore3tofp7yhcynltuepdzhwhvcudomh6vdrloepnv0a1qyqjhmvgj43x3tk7jv5nyxwq9poqbail7qooz3tc8cq7d1cfsuxj06m09cj9dkrvaz6kszvktpn92wjpy86wcp2wrz2rov4mc7eqkqqbpsulnhowa7igx48825uf4jnsft2my97r21kopzijh48xt6lnccnnr9l4tqaa5sbl2jeo5kzrg3nch2ic0iaymtr0y4v1eiog9pvunbpda5gkvjkr588hvemlpzpg9cqligsbxp28rhe7anfp6ihrv2dem5k6myjgcty6epi04m4wjk4isk6xyr2empl5bav9omyvhc5zc == 
\q\r\z\5\s\t\d\9\b\8\i\n\o\g\7\p\6\x\p\e\v\e\s\q\i\x\b\w\2\h\j\7\x\a\u\v\8\9\a\m\j\w\s\t\9\v\k\d\y\q\i\q\b\g\f\d\d\4\t\l\p\2\s\q\z\w\l\r\i\z\x\i\3\o\9\y\w\r\p\n\5\8\5\n\c\y\4\d\z\2\7\b\v\j\t\t\t\j\6\j\o\h\7\o\j\4\s\y\3\m\p\w\w\a\h\1\d\u\1\9\m\n\c\p\y\t\a\x\w\f\a\1\o\p\m\t\y\y\o\1\i\d\e\j\q\i\p\1\m\d\h\y\q\h\t\z\l\q\f\r\j\r\q\8\5\1\k\u\t\u\t\q\l\0\q\d\6\s\8\g\q\l\i\h\4\d\8\r\d\r\3\t\m\s\6\j\z\6\n\5\q\2\c\w\d\8\n\g\7\d\5\l\b\3\1\d\5\t\f\s\9\0\d\x\o\0\n\r\k\r\e\f\a\6\z\x\y\5\w\o\z\6\u\r\r\z\k\j\x\3\f\0\d\5\b\v\h\j\x\w\p\z\i\3\q\t\4\3\y\f\v\b\6\g\q\e\m\o\h\n\1\b\u\6\v\w\1\m\m\h\s\g\c\c\g\3\8\0\n\l\a\8\8\o\c\a\s\z\2\k\4\m\d\l\r\f\p\q\i\t\0\i\s\d\r\x\o\5\n\4\z\f\b\3\p\r\s\0\1\0\6\u\j\t\z\4\n\s\i\c\1\9\o\d\x\k\d\o\b\a\d\1\p\k\z\l\u\k\r\8\n\a\j\m\2\u\b\a\u\1\p\n\f\2\r\0\l\h\j\8\w\n\e\1\b\3\7\m\l\c\2\n\p\l\z\4\j\o\f\x\z\l\0\p\w\8\8\o\l\r\p\s\0\2\t\6\z\o\n\d\p\w\9\g\k\m\x\s\b\w\o\v\g\9\x\r\i\b\y\u\s\4\q\4\c\f\w\p\3\e\g\o\w\c\s\t\s\4\k\t\u\q\1\q\2\w\o\j\e\n\v\0\w\f\z\m\e\x\6\9\9\m\e\6\l\1\2\t\e\k\f\j\d\8\n\4\d\c\7\6\7\9\n\n\r\v\0\o\6\q\8\6\h\3\7\9\2\i\z\6\g\7\o\p\r\u\h\l\p\k\i\q\4\1\6\h\2\h\h\3\q\v\r\6\u\b\8\b\q\i\a\w\9\q\q\9\3\p\v\8\6\g\g\l\v\s\l\a\2\9\2\m\v\k\k\6\2\5\v\t\5\6\y\4\4\9\s\0\6\2\p\j\c\6\2\r\m\p\g\c\e\j\m\6\x\0\x\9\i\j\q\i\c\v\9\2\q\r\p\v\c\i\x\d\l\g\5\9\m\m\c\p\z\h\h\q\l\8\t\m\o\i\6\x\v\l\x\u\8\m\w\f\5\b\9\3\r\h\q\p\2\a\u\g\0\o\r\e\3\t\o\f\p\7\y\h\c\y\n\l\t\u\e\p\d\z\h\w\h\v\c\u\d\o\m\h\6\v\d\r\l\o\e\p\n\v\0\a\1\q\y\q\j\h\m\v\g\j\4\3\x\3\t\k\7\j\v\5\n\y\x\w\q\9\p\o\q\b\a\i\l\7\q\o\o\z\3\t\c\8\c\q\7\d\1\c\f\s\u\x\j\0\6\m\0\9\c\j\9\d\k\r\v\a\z\6\k\s\z\v\k\t\p\n\9\2\w\j\p\y\8\6\w\c\p\2\w\r\z\2\r\o\v\4\m\c\7\e\q\k\q\q\b\p\s\u\l\n\h\o\w\a\7\i\g\x\4\8\8\2\5\u\f\4\j\n\s\f\t\2\m\y\9\7\r\2\1\k\o\p\z\i\j\h\4\8\x\t\6\l\n\c\c\n\n\r\9\l\4\t\q\a\a\5\s\b\l\2\j\e\o\5\k\z\r\g\3\n\c\h\2\i\c\0\i\a\y\m\t\r\0\y\4\v\1\e\i\o\g\9\p\v\u\n\b\p\d\a\5\g\k\v\j\k\r\5\8\8\h\v\e\m\l\p\z\p\g\9\c\q\l\i\g\s\b\x\p\2\8\r\h\e\7\a\n\f\p\6\i\h\r\v\2\d\e\m\5\k\6\m\y\j\g\c\t\y\6\e\p\i\0\4\m\4\w\j\k\4\i\s\k\6\x\y\r\2\e\m\p\l\5\b\a\v\9\o\m\y\v\h\c\5\z\c ]] 00:07:35.456 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:35.456 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ qrz5std9b8inog7p6xpevesqixbw2hj7xauv89amjwst9vkdyqiqbgfdd4tlp2sqzwlrizxi3o9ywrpn585ncy4dz27bvjtttj6joh7oj4sy3mpwwah1du19mncpytaxwfa1opmtyyo1idejqip1mdhyqhtzlqfrjrq851kututql0qd6s8gqlih4d8rdr3tms6jz6n5q2cwd8ng7d5lb31d5tfs90dxo0nrkrefa6zxy5woz6urrzkjx3f0d5bvhjxwpzi3qt43yfvb6gqemohn1bu6vw1mmhsgccg380nla88ocasz2k4mdlrfpqit0isdrxo5n4zfb3prs0106ujtz4nsic19odxkdobad1pkzlukr8najm2ubau1pnf2r0lhj8wne1b37mlc2nplz4jofxzl0pw88olrps02t6zondpw9gkmxsbwovg9xribyus4q4cfwp3egowcsts4ktuq1q2wojenv0wfzmex699me6l12tekfjd8n4dc7679nnrv0o6q86h3792iz6g7opruhlpkiq416h2hh3qvr6ub8bqiaw9qq93pv86gglvsla292mvkk625vt56y449s062pjc62rmpgcejm6x0x9ijqicv92qrpvcixdlg59mmcpzhhql8tmoi6xvlxu8mwf5b93rhqp2aug0ore3tofp7yhcynltuepdzhwhvcudomh6vdrloepnv0a1qyqjhmvgj43x3tk7jv5nyxwq9poqbail7qooz3tc8cq7d1cfsuxj06m09cj9dkrvaz6kszvktpn92wjpy86wcp2wrz2rov4mc7eqkqqbpsulnhowa7igx48825uf4jnsft2my97r21kopzijh48xt6lnccnnr9l4tqaa5sbl2jeo5kzrg3nch2ic0iaymtr0y4v1eiog9pvunbpda5gkvjkr588hvemlpzpg9cqligsbxp28rhe7anfp6ihrv2dem5k6myjgcty6epi04m4wjk4isk6xyr2empl5bav9omyvhc5zc == 
\q\r\z\5\s\t\d\9\b\8\i\n\o\g\7\p\6\x\p\e\v\e\s\q\i\x\b\w\2\h\j\7\x\a\u\v\8\9\a\m\j\w\s\t\9\v\k\d\y\q\i\q\b\g\f\d\d\4\t\l\p\2\s\q\z\w\l\r\i\z\x\i\3\o\9\y\w\r\p\n\5\8\5\n\c\y\4\d\z\2\7\b\v\j\t\t\t\j\6\j\o\h\7\o\j\4\s\y\3\m\p\w\w\a\h\1\d\u\1\9\m\n\c\p\y\t\a\x\w\f\a\1\o\p\m\t\y\y\o\1\i\d\e\j\q\i\p\1\m\d\h\y\q\h\t\z\l\q\f\r\j\r\q\8\5\1\k\u\t\u\t\q\l\0\q\d\6\s\8\g\q\l\i\h\4\d\8\r\d\r\3\t\m\s\6\j\z\6\n\5\q\2\c\w\d\8\n\g\7\d\5\l\b\3\1\d\5\t\f\s\9\0\d\x\o\0\n\r\k\r\e\f\a\6\z\x\y\5\w\o\z\6\u\r\r\z\k\j\x\3\f\0\d\5\b\v\h\j\x\w\p\z\i\3\q\t\4\3\y\f\v\b\6\g\q\e\m\o\h\n\1\b\u\6\v\w\1\m\m\h\s\g\c\c\g\3\8\0\n\l\a\8\8\o\c\a\s\z\2\k\4\m\d\l\r\f\p\q\i\t\0\i\s\d\r\x\o\5\n\4\z\f\b\3\p\r\s\0\1\0\6\u\j\t\z\4\n\s\i\c\1\9\o\d\x\k\d\o\b\a\d\1\p\k\z\l\u\k\r\8\n\a\j\m\2\u\b\a\u\1\p\n\f\2\r\0\l\h\j\8\w\n\e\1\b\3\7\m\l\c\2\n\p\l\z\4\j\o\f\x\z\l\0\p\w\8\8\o\l\r\p\s\0\2\t\6\z\o\n\d\p\w\9\g\k\m\x\s\b\w\o\v\g\9\x\r\i\b\y\u\s\4\q\4\c\f\w\p\3\e\g\o\w\c\s\t\s\4\k\t\u\q\1\q\2\w\o\j\e\n\v\0\w\f\z\m\e\x\6\9\9\m\e\6\l\1\2\t\e\k\f\j\d\8\n\4\d\c\7\6\7\9\n\n\r\v\0\o\6\q\8\6\h\3\7\9\2\i\z\6\g\7\o\p\r\u\h\l\p\k\i\q\4\1\6\h\2\h\h\3\q\v\r\6\u\b\8\b\q\i\a\w\9\q\q\9\3\p\v\8\6\g\g\l\v\s\l\a\2\9\2\m\v\k\k\6\2\5\v\t\5\6\y\4\4\9\s\0\6\2\p\j\c\6\2\r\m\p\g\c\e\j\m\6\x\0\x\9\i\j\q\i\c\v\9\2\q\r\p\v\c\i\x\d\l\g\5\9\m\m\c\p\z\h\h\q\l\8\t\m\o\i\6\x\v\l\x\u\8\m\w\f\5\b\9\3\r\h\q\p\2\a\u\g\0\o\r\e\3\t\o\f\p\7\y\h\c\y\n\l\t\u\e\p\d\z\h\w\h\v\c\u\d\o\m\h\6\v\d\r\l\o\e\p\n\v\0\a\1\q\y\q\j\h\m\v\g\j\4\3\x\3\t\k\7\j\v\5\n\y\x\w\q\9\p\o\q\b\a\i\l\7\q\o\o\z\3\t\c\8\c\q\7\d\1\c\f\s\u\x\j\0\6\m\0\9\c\j\9\d\k\r\v\a\z\6\k\s\z\v\k\t\p\n\9\2\w\j\p\y\8\6\w\c\p\2\w\r\z\2\r\o\v\4\m\c\7\e\q\k\q\q\b\p\s\u\l\n\h\o\w\a\7\i\g\x\4\8\8\2\5\u\f\4\j\n\s\f\t\2\m\y\9\7\r\2\1\k\o\p\z\i\j\h\4\8\x\t\6\l\n\c\c\n\n\r\9\l\4\t\q\a\a\5\s\b\l\2\j\e\o\5\k\z\r\g\3\n\c\h\2\i\c\0\i\a\y\m\t\r\0\y\4\v\1\e\i\o\g\9\p\v\u\n\b\p\d\a\5\g\k\v\j\k\r\5\8\8\h\v\e\m\l\p\z\p\g\9\c\q\l\i\g\s\b\x\p\2\8\r\h\e\7\a\n\f\p\6\i\h\r\v\2\d\e\m\5\k\6\m\y\j\g\c\t\y\6\e\p\i\0\4\m\4\w\j\k\4\i\s\k\6\x\y\r\2\e\m\p\l\5\b\a\v\9\o\m\y\v\h\c\5\z\c ]] 00:07:35.456 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:35.714 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:35.714 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:35.714 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.714 21:21:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.714 [2024-07-15 21:21:09.044024] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:35.714 [2024-07-15 21:21:09.044091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64037 ] 00:07:35.714 { 00:07:35.714 "subsystems": [ 00:07:35.714 { 00:07:35.714 "subsystem": "bdev", 00:07:35.714 "config": [ 00:07:35.714 { 00:07:35.714 "params": { 00:07:35.714 "block_size": 512, 00:07:35.714 "num_blocks": 1048576, 00:07:35.714 "name": "malloc0" 00:07:35.714 }, 00:07:35.714 "method": "bdev_malloc_create" 00:07:35.714 }, 00:07:35.714 { 00:07:35.714 "params": { 00:07:35.714 "filename": "/dev/zram1", 00:07:35.714 "name": "uring0" 00:07:35.714 }, 00:07:35.714 "method": "bdev_uring_create" 00:07:35.714 }, 00:07:35.714 { 00:07:35.714 "method": "bdev_wait_for_examine" 00:07:35.714 } 00:07:35.714 ] 00:07:35.714 } 00:07:35.714 ] 00:07:35.714 } 00:07:35.971 [2024-07-15 21:21:09.184540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.971 [2024-07-15 21:21:09.271749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.971 [2024-07-15 21:21:09.312754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.108  Copying: 199/512 [MB] (199 MBps) Copying: 397/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 198 MBps) 00:07:39.108 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.108 21:21:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.108 [2024-07-15 21:21:12.416167] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:39.108 [2024-07-15 21:21:12.416231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64088 ] 00:07:39.108 { 00:07:39.108 "subsystems": [ 00:07:39.108 { 00:07:39.108 "subsystem": "bdev", 00:07:39.108 "config": [ 00:07:39.108 { 00:07:39.108 "params": { 00:07:39.108 "block_size": 512, 00:07:39.108 "num_blocks": 1048576, 00:07:39.108 "name": "malloc0" 00:07:39.108 }, 00:07:39.108 "method": "bdev_malloc_create" 00:07:39.108 }, 00:07:39.108 { 00:07:39.108 "params": { 00:07:39.108 "filename": "/dev/zram1", 00:07:39.108 "name": "uring0" 00:07:39.108 }, 00:07:39.108 "method": "bdev_uring_create" 00:07:39.108 }, 00:07:39.108 { 00:07:39.108 "params": { 00:07:39.108 "name": "uring0" 00:07:39.108 }, 00:07:39.108 "method": "bdev_uring_delete" 00:07:39.108 }, 00:07:39.108 { 00:07:39.108 "method": "bdev_wait_for_examine" 00:07:39.108 } 00:07:39.108 ] 00:07:39.108 } 00:07:39.108 ] 00:07:39.108 } 00:07:39.367 [2024-07-15 21:21:12.555009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.367 [2024-07-15 21:21:12.644394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.367 [2024-07-15 21:21:12.685470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.885  Copying: 0/0 [B] (average 0 Bps) 00:07:39.885 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.885 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:39.885 [2024-07-15 21:21:13.230911] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:39.885 [2024-07-15 21:21:13.230979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:07:39.885 { 00:07:39.885 "subsystems": [ 00:07:39.885 { 00:07:39.885 "subsystem": "bdev", 00:07:39.885 "config": [ 00:07:39.885 { 00:07:39.885 "params": { 00:07:39.885 "block_size": 512, 00:07:39.885 "num_blocks": 1048576, 00:07:39.885 "name": "malloc0" 00:07:39.885 }, 00:07:39.885 "method": "bdev_malloc_create" 00:07:39.885 }, 00:07:39.885 { 00:07:39.885 "params": { 00:07:39.885 "filename": "/dev/zram1", 00:07:39.885 "name": "uring0" 00:07:39.885 }, 00:07:39.885 "method": "bdev_uring_create" 00:07:39.885 }, 00:07:39.885 { 00:07:39.885 "params": { 00:07:39.885 "name": "uring0" 00:07:39.885 }, 00:07:39.885 "method": "bdev_uring_delete" 00:07:39.885 }, 00:07:39.885 { 00:07:39.885 "method": "bdev_wait_for_examine" 00:07:39.885 } 00:07:39.885 ] 00:07:39.885 } 00:07:39.885 ] 00:07:39.885 } 00:07:40.143 [2024-07-15 21:21:13.372863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.143 [2024-07-15 21:21:13.466658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.143 [2024-07-15 21:21:13.507724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.434 [2024-07-15 21:21:13.669668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:40.434 [2024-07-15 21:21:13.669711] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:40.434 [2024-07-15 21:21:13.669720] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:40.434 [2024-07-15 21:21:13.669730] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.770 [2024-07-15 21:21:13.912866] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:40.770 21:21:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:40.770 21:21:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:41.029 00:07:41.029 ************************************ 00:07:41.029 END TEST dd_uring_copy 00:07:41.029 ************************************ 00:07:41.029 real 0m12.618s 00:07:41.029 user 0m8.257s 00:07:41.029 sys 0m10.499s 00:07:41.029 21:21:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.029 21:21:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.029 21:21:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:41.029 ************************************ 00:07:41.029 END TEST spdk_dd_uring 00:07:41.029 ************************************ 00:07:41.029 00:07:41.029 real 0m12.812s 00:07:41.029 user 0m8.334s 00:07:41.029 sys 0m10.619s 00:07:41.029 21:21:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.029 21:21:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:41.029 21:21:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:41.029 21:21:14 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:41.029 21:21:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.029 21:21:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.029 21:21:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:41.029 ************************************ 00:07:41.029 START TEST spdk_dd_sparse 00:07:41.029 ************************************ 00:07:41.029 21:21:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:41.288 * Looking for test storage... 00:07:41.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:41.288 21:21:14 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.288 21:21:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.288 21:21:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.288 21:21:14 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.288 21:21:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:41.289 1+0 records in 00:07:41.289 1+0 records out 00:07:41.289 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00898878 s, 467 MB/s 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:41.289 1+0 records in 00:07:41.289 1+0 records out 00:07:41.289 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00953273 s, 440 MB/s 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:41.289 1+0 records in 00:07:41.289 1+0 records out 00:07:41.289 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00902694 s, 465 MB/s 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:41.289 ************************************ 00:07:41.289 START TEST dd_sparse_file_to_file 00:07:41.289 ************************************ 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:41.289 21:21:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:41.289 { 00:07:41.289 "subsystems": [ 00:07:41.289 { 00:07:41.289 "subsystem": "bdev", 00:07:41.289 "config": [ 00:07:41.289 { 00:07:41.289 "params": { 00:07:41.289 "block_size": 4096, 00:07:41.289 "filename": "dd_sparse_aio_disk", 00:07:41.289 "name": "dd_aio" 00:07:41.289 }, 00:07:41.289 "method": "bdev_aio_create" 00:07:41.289 }, 00:07:41.289 { 00:07:41.289 "params": { 00:07:41.289 "lvs_name": "dd_lvstore", 00:07:41.289 "bdev_name": "dd_aio" 00:07:41.289 }, 00:07:41.289 "method": "bdev_lvol_create_lvstore" 00:07:41.289 }, 00:07:41.289 { 00:07:41.289 "method": "bdev_wait_for_examine" 00:07:41.289 } 00:07:41.289 ] 00:07:41.289 } 00:07:41.289 ] 00:07:41.289 } 00:07:41.289 [2024-07-15 21:21:14.560501] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
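For readability, the dd_sparse_file_to_file step traced above can be reconstructed as a standalone sketch. The binary path, flags, file names and JSON body are taken from the trace itself; bdev.json is a hypothetical file standing in for the /dev/fd/62 pipe that gen_conf feeds to --json, so this is a by-hand approximation of the harness, not the harness itself.

# Sketch only: reproduces the file_zero1 -> file_zero2 sparse copy by hand.
# bdev.json is a hypothetical stand-in for the /dev/fd/62 pipe used above.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
          "method": "bdev_lvol_create_lvstore" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
truncate dd_sparse_aio_disk --size 104857600         # backing file for the dd_aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1          # three 4 MiB extents with holes in between
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json bdev.json

With this layout file_zero1 ends up 36 MiB apparent with 12 MiB actually allocated, which is exactly the pair of numbers (37748736 bytes, 24576 blocks) that the stat checks later in the test compare.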
00:07:41.289 [2024-07-15 21:21:14.560567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64203 ] 00:07:41.548 [2024-07-15 21:21:14.701019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.548 [2024-07-15 21:21:14.785424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.548 [2024-07-15 21:21:14.826571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.808  Copying: 12/36 [MB] (average 800 MBps) 00:07:41.808 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:41.808 ************************************ 00:07:41.808 END TEST dd_sparse_file_to_file 00:07:41.808 ************************************ 00:07:41.808 00:07:41.808 real 0m0.642s 00:07:41.808 user 0m0.379s 00:07:41.808 sys 0m0.324s 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.808 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:42.067 ************************************ 00:07:42.067 START TEST dd_sparse_file_to_bdev 00:07:42.067 ************************************ 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # 
method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:42.067 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.067 [2024-07-15 21:21:15.266367] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:42.067 [2024-07-15 21:21:15.266432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64245 ] 00:07:42.067 { 00:07:42.067 "subsystems": [ 00:07:42.067 { 00:07:42.067 "subsystem": "bdev", 00:07:42.067 "config": [ 00:07:42.067 { 00:07:42.067 "params": { 00:07:42.067 "block_size": 4096, 00:07:42.067 "filename": "dd_sparse_aio_disk", 00:07:42.067 "name": "dd_aio" 00:07:42.067 }, 00:07:42.067 "method": "bdev_aio_create" 00:07:42.067 }, 00:07:42.067 { 00:07:42.067 "params": { 00:07:42.067 "lvs_name": "dd_lvstore", 00:07:42.067 "lvol_name": "dd_lvol", 00:07:42.067 "size_in_mib": 36, 00:07:42.067 "thin_provision": true 00:07:42.067 }, 00:07:42.067 "method": "bdev_lvol_create" 00:07:42.067 }, 00:07:42.067 { 00:07:42.067 "method": "bdev_wait_for_examine" 00:07:42.067 } 00:07:42.067 ] 00:07:42.067 } 00:07:42.067 ] 00:07:42.067 } 00:07:42.067 [2024-07-15 21:21:15.406311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.325 [2024-07-15 21:21:15.488613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.325 [2024-07-15 21:21:15.529838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.584  Copying: 12/36 [MB] (average 461 MBps) 00:07:42.584 00:07:42.584 00:07:42.584 real 0m0.595s 00:07:42.584 user 0m0.380s 00:07:42.584 sys 0m0.300s 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:42.584 ************************************ 00:07:42.584 END TEST dd_sparse_file_to_bdev 00:07:42.584 ************************************ 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:42.584 ************************************ 00:07:42.584 START TEST dd_sparse_bdev_to_file 00:07:42.584 ************************************ 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
common/autotest_common.sh@1123 -- # bdev_to_file 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:42.584 21:21:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:42.584 [2024-07-15 21:21:15.934251] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:42.584 [2024-07-15 21:21:15.934316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64278 ] 00:07:42.584 { 00:07:42.584 "subsystems": [ 00:07:42.584 { 00:07:42.584 "subsystem": "bdev", 00:07:42.584 "config": [ 00:07:42.584 { 00:07:42.584 "params": { 00:07:42.584 "block_size": 4096, 00:07:42.584 "filename": "dd_sparse_aio_disk", 00:07:42.584 "name": "dd_aio" 00:07:42.584 }, 00:07:42.584 "method": "bdev_aio_create" 00:07:42.584 }, 00:07:42.584 { 00:07:42.584 "method": "bdev_wait_for_examine" 00:07:42.584 } 00:07:42.584 ] 00:07:42.584 } 00:07:42.584 ] 00:07:42.584 } 00:07:42.843 [2024-07-15 21:21:16.075090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.843 [2024-07-15 21:21:16.161116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.843 [2024-07-15 21:21:16.202270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.104  Copying: 12/36 [MB] (average 750 MBps) 00:07:43.104 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:43.363 ************************************ 00:07:43.363 END TEST dd_sparse_bdev_to_file 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
dd/sparse.sh@103 -- # stat3_b=24576 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:43.363 00:07:43.363 real 0m0.632s 00:07:43.363 user 0m0.388s 00:07:43.363 sys 0m0.323s 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 ************************************ 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:43.363 ************************************ 00:07:43.363 END TEST spdk_dd_sparse 00:07:43.363 ************************************ 00:07:43.363 00:07:43.363 real 0m2.288s 00:07:43.363 user 0m1.292s 00:07:43.363 sys 0m1.224s 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.363 21:21:16 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 21:21:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:43.363 21:21:16 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:43.363 21:21:16 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.363 21:21:16 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.363 21:21:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.363 ************************************ 00:07:43.363 START TEST spdk_dd_negative 00:07:43.363 ************************************ 00:07:43.363 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:43.623 * Looking for test storage... 
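The pass criteria for the three sparse tests that just finished reduce to two stat comparisons per copy (the bdev_to_file case applies the same check to file_zero2 and file_zero3). A condensed sketch of that check, using the same file names as the harness; the error messages in the failure branches are illustrative additions, not harness output:

# Sketch of the sparse-copy verification above: the copy must keep the same
# apparent size *and* the same allocated block count, i.e. preserve the holes.
src=file_zero1
dst=file_zero2
[[ "$(stat --printf=%s "$src")" == "$(stat --printf=%s "$dst")" ]] \
    || { echo "apparent size differs" >&2; exit 1; }     # 37748736 in the run above
[[ "$(stat --printf=%b "$src")" == "$(stat --printf=%b "$dst")" ]] \
    || { echo "allocated block count differs" >&2; exit 1; }   # 24576 in the run above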
00:07:43.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.623 21:21:16 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.624 ************************************ 00:07:43.624 START TEST dd_invalid_arguments 00:07:43.624 ************************************ 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.624 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:43.624 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:43.624 00:07:43.624 CPU options: 00:07:43.624 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:43.624 (like [0,1,10]) 00:07:43.624 --lcores lcore to CPU mapping list. The list is in the format: 00:07:43.624 [<,lcores[@CPUs]>...] 00:07:43.624 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:43.624 Within the group, '-' is used for range separator, 00:07:43.624 ',' is used for single number separator. 00:07:43.624 '( )' can be omitted for single element group, 00:07:43.624 '@' can be omitted if cpus and lcores have the same value 00:07:43.624 --disable-cpumask-locks Disable CPU core lock files. 
00:07:43.624 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:43.624 pollers in the app support interrupt mode) 00:07:43.624 -p, --main-core main (primary) core for DPDK 00:07:43.624 00:07:43.624 Configuration options: 00:07:43.624 -c, --config, --json JSON config file 00:07:43.624 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:43.624 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:43.624 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:43.624 --rpcs-allowed comma-separated list of permitted RPCS 00:07:43.624 --json-ignore-init-errors don't exit on invalid config entry 00:07:43.624 00:07:43.624 Memory options: 00:07:43.624 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:43.624 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:43.624 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:43.624 -R, --huge-unlink unlink huge files after initialization 00:07:43.624 -n, --mem-channels number of memory channels used for DPDK 00:07:43.624 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:43.624 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:43.624 --no-huge run without using hugepages 00:07:43.624 -i, --shm-id shared memory ID (optional) 00:07:43.624 -g, --single-file-segments force creating just one hugetlbfs file 00:07:43.624 00:07:43.624 PCI options: 00:07:43.624 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:43.624 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:43.624 -u, --no-pci disable PCI access 00:07:43.624 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:43.624 00:07:43.624 Log options: 00:07:43.624 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:43.624 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:43.624 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:43.624 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:43.624 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:43.624 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:43.624 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:43.624 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:43.624 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:43.624 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:43.624 virtio_vfio_user, vmd) 00:07:43.624 --silence-noticelog disable notice level logging to stderr 00:07:43.624 00:07:43.624 Trace options: 00:07:43.624 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:43.624 setting 0 to disable trace (default 32768) 00:07:43.624 Tracepoints vary in size and can use more than one trace entry. 00:07:43.624 -e, --tpoint-group [:] 00:07:43.624 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:43.624 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:43.624 [2024-07-15 21:21:16.885060] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:43.624 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:43.624 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:43.624 a tracepoint group. 
First tpoint inside a group can be enabled by 00:07:43.624 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:43.624 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:43.624 in /include/spdk_internal/trace_defs.h 00:07:43.624 00:07:43.624 Other options: 00:07:43.624 -h, --help show this usage 00:07:43.624 -v, --version print SPDK version 00:07:43.624 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:43.624 --env-context Opaque context for use of the env implementation 00:07:43.624 00:07:43.624 Application specific: 00:07:43.624 [--------- DD Options ---------] 00:07:43.624 --if Input file. Must specify either --if or --ib. 00:07:43.624 --ib Input bdev. Must specifier either --if or --ib 00:07:43.624 --of Output file. Must specify either --of or --ob. 00:07:43.624 --ob Output bdev. Must specify either --of or --ob. 00:07:43.624 --iflag Input file flags. 00:07:43.624 --oflag Output file flags. 00:07:43.624 --bs I/O unit size (default: 4096) 00:07:43.624 --qd Queue depth (default: 2) 00:07:43.624 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:43.624 --skip Skip this many I/O units at start of input. (default: 0) 00:07:43.624 --seek Skip this many I/O units at start of output. (default: 0) 00:07:43.624 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:43.625 --sparse Enable hole skipping in input target 00:07:43.625 Available iflag and oflag values: 00:07:43.625 append - append mode 00:07:43.625 direct - use direct I/O for data 00:07:43.625 directory - fail unless a directory 00:07:43.625 dsync - use synchronized I/O for data 00:07:43.625 noatime - do not update access time 00:07:43.625 noctty - do not assign controlling terminal from file 00:07:43.625 nofollow - do not follow symlinks 00:07:43.625 nonblock - use non-blocking I/O 00:07:43.625 sync - use synchronized I/O for data and metadata 00:07:43.625 ************************************ 00:07:43.625 END TEST dd_invalid_arguments 00:07:43.625 ************************************ 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.625 00:07:43.625 real 0m0.070s 00:07:43.625 user 0m0.036s 00:07:43.625 sys 0m0.032s 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.625 ************************************ 00:07:43.625 START TEST dd_double_input 00:07:43.625 ************************************ 00:07:43.625 21:21:16 
spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.625 21:21:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:43.885 [2024-07-15 21:21:17.022441] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
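Every negative test in this block follows the same shape: run spdk_dd with a deliberately invalid flag combination, require a non-zero exit, and match the *ERROR* line it prints. A reduced sketch of the dd_double_input case just traced; capturing output to a scratch file (dd.err) is an assumption for illustration, since the harness instead wraps the call in its NOT helper and checks the exit status it calls es (22 here):

# Sketch of the negative-test pattern: the command must FAIL for the test to
# pass, and the failure must be the documented argument-parsing error.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$spdk_dd" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= &> dd.err; then
    echo "expected spdk_dd to reject --if combined with --ib" >&2
    exit 1
fi
grep -qF 'You may specify either --if or --ib, but not both.' dd.err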
00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:43.885 ************************************ 00:07:43.885 END TEST dd_double_input 00:07:43.885 ************************************ 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.885 00:07:43.885 real 0m0.068s 00:07:43.885 user 0m0.039s 00:07:43.885 sys 0m0.029s 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 ************************************ 00:07:43.885 START TEST dd_double_output 00:07:43.885 ************************************ 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:43.885 [2024-07-15 21:21:17.154362] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:43.885 ************************************ 00:07:43.885 END TEST dd_double_output 00:07:43.885 ************************************ 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:43.885 00:07:43.885 real 0m0.066s 00:07:43.885 user 0m0.040s 00:07:43.885 sys 0m0.026s 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 ************************************ 00:07:43.885 START TEST dd_no_input 00:07:43.885 ************************************ 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.885 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.885 21:21:17 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:44.145 [2024-07-15 21:21:17.286291] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.145 ************************************ 00:07:44.145 END TEST dd_no_input 00:07:44.145 ************************************ 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.145 00:07:44.145 real 0m0.068s 00:07:44.145 user 0m0.036s 00:07:44.145 sys 0m0.031s 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.145 ************************************ 00:07:44.145 START TEST dd_no_output 00:07:44.145 ************************************ 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.145 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.146 21:21:17 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.146 [2024-07-15 21:21:17.426591] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.146 00:07:44.146 real 0m0.071s 00:07:44.146 user 0m0.039s 00:07:44.146 sys 0m0.031s 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.146 ************************************ 00:07:44.146 END TEST dd_no_output 00:07:44.146 ************************************ 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.146 ************************************ 00:07:44.146 START TEST dd_wrong_blocksize 00:07:44.146 ************************************ 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.146 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:44.406 [2024-07-15 21:21:17.568179] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.406 00:07:44.406 real 0m0.071s 00:07:44.406 user 0m0.029s 00:07:44.406 sys 0m0.040s 00:07:44.406 ************************************ 00:07:44.406 END TEST dd_wrong_blocksize 00:07:44.406 ************************************ 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.406 ************************************ 00:07:44.406 START TEST dd_smaller_blocksize 00:07:44.406 ************************************ 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.406 21:21:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:44.406 [2024-07-15 21:21:17.713478] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:44.406 [2024-07-15 21:21:17.713550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64502 ] 00:07:44.665 [2024-07-15 21:21:17.854758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.665 [2024-07-15 21:21:17.951892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.665 [2024-07-15 21:21:17.992676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.925 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:45.184 [2024-07-15 21:21:18.300182] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:45.184 [2024-07-15 21:21:18.300257] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.184 [2024-07-15 21:21:18.392268] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:45.184 ************************************ 00:07:45.184 END TEST dd_smaller_blocksize 00:07:45.184 ************************************ 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.184 00:07:45.184 real 0m0.823s 00:07:45.184 user 0m0.375s 00:07:45.184 sys 0m0.341s 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.184 ************************************ 00:07:45.184 START TEST dd_invalid_count 00:07:45.184 ************************************ 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.184 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:45.454 [2024-07-15 21:21:18.606057] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:45.454 ************************************ 00:07:45.454 END TEST dd_invalid_count 00:07:45.454 ************************************ 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.454 00:07:45.454 real 0m0.070s 00:07:45.454 user 0m0.043s 00:07:45.454 sys 0m0.026s 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.454 ************************************ 00:07:45.454 START TEST dd_invalid_oflag 00:07:45.454 ************************************ 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:45.454 [2024-07-15 21:21:18.740286] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:45.454 ************************************ 00:07:45.454 END TEST dd_invalid_oflag 00:07:45.454 ************************************ 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.454 00:07:45.454 real 0m0.066s 00:07:45.454 user 0m0.037s 00:07:45.454 sys 0m0.029s 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.454 21:21:18 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.454 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.732 ************************************ 00:07:45.732 START TEST dd_invalid_iflag 00:07:45.732 ************************************ 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:45.732 [2024-07-15 21:21:18.881794] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:45.732 ************************************ 00:07:45.732 END TEST dd_invalid_iflag 00:07:45.732 ************************************ 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.732 00:07:45.732 real 0m0.070s 00:07:45.732 user 0m0.039s 00:07:45.732 sys 0m0.030s 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.732 ************************************ 00:07:45.732 START TEST dd_unknown_flag 00:07:45.732 ************************************ 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.732 21:21:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:45.732 [2024-07-15 21:21:19.015991] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
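Every negative case in this block uses the same wrap-and-expect-failure pattern: the spdk_dd invocation is run under the suite's NOT helper, so the test passes only when the tool rejects the bad argument, and the exit status is then normalized (the es= lines above). A minimal sketch of that pattern follows; the helper body and the short dump file names are stand-ins, not the verbatim autotest_common.sh code.

# minimal stand-in for the NOT helper: succeed only when the wrapped command fails
NOT() { ! "$@"; }

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as printed in the log
NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --count=-9    # negative count must be rejected
NOT "$SPDK_DD" --ib= --ob= --oflag=0                     # --oflag without --of must fail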
00:07:45.732 [2024-07-15 21:21:19.016057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64594 ] 00:07:45.990 [2024-07-15 21:21:19.156326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.990 [2024-07-15 21:21:19.248125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.990 [2024-07-15 21:21:19.289183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.990 [2024-07-15 21:21:19.318550] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:45.990 [2024-07-15 21:21:19.318831] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.990 [2024-07-15 21:21:19.318965] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:45.990 [2024-07-15 21:21:19.319094] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.990 [2024-07-15 21:21:19.319375] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:45.990 [2024-07-15 21:21:19.319400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.990 [2024-07-15 21:21:19.319466] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:45.990 [2024-07-15 21:21:19.319481] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:46.249 [2024-07-15 21:21:19.410466] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.249 00:07:46.249 real 0m0.536s 00:07:46.249 user 0m0.298s 00:07:46.249 sys 0m0.145s 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.249 ************************************ 00:07:46.249 END TEST dd_unknown_flag 00:07:46.249 ************************************ 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.249 ************************************ 00:07:46.249 START TEST dd_invalid_json 00:07:46.249 ************************************ 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:46.249 21:21:19 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.249 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:46.508 [2024-07-15 21:21:19.625681] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
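The invalid-JSON case hands the --json configuration to spdk_dd over a file descriptor rather than a file on disk, and an empty document is expected to trip the "JSON data cannot be empty" parser error reported below. A hedged sketch of the same technique using bash process substitution; the dump file names are placeholders.

# feed the --json config over an anonymous file descriptor; an empty body must fail
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --json <(printf ''); then
    echo "unexpected success"
else
    echo "rejected empty JSON as expected"
fi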
00:07:46.508 [2024-07-15 21:21:19.625749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64628 ] 00:07:46.508 [2024-07-15 21:21:19.765434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.508 [2024-07-15 21:21:19.849305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.508 [2024-07-15 21:21:19.849367] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:46.508 [2024-07-15 21:21:19.849382] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.508 [2024-07-15 21:21:19.849390] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.508 [2024-07-15 21:21:19.849422] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.767 00:07:46.767 real 0m0.366s 00:07:46.767 user 0m0.187s 00:07:46.767 sys 0m0.077s 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.767 ************************************ 00:07:46.767 END TEST dd_invalid_json 00:07:46.767 ************************************ 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:46.767 00:07:46.767 real 0m3.316s 00:07:46.767 user 0m1.537s 00:07:46.767 sys 0m1.451s 00:07:46.767 ************************************ 00:07:46.767 END TEST spdk_dd_negative 00:07:46.767 ************************************ 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.767 21:21:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.767 21:21:20 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:46.767 ************************************ 00:07:46.767 END TEST spdk_dd 00:07:46.767 ************************************ 00:07:46.767 00:07:46.767 real 1m11.012s 00:07:46.767 user 0m44.446s 00:07:46.767 sys 0m30.424s 00:07:46.767 21:21:20 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.767 21:21:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.767 21:21:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:46.767 21:21:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:46.767 21:21:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:46.767 21:21:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:46.767 21:21:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.767 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 21:21:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:47.026 21:21:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:47.026 21:21:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:47.026 21:21:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:47.026 21:21:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:47.026 21:21:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:47.026 21:21:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.026 21:21:20 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.026 21:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.026 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 START TEST nvmf_tcp 00:07:47.026 ************************************ 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:47.026 * Looking for test storage... 00:07:47.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.026 21:21:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.026 21:21:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.026 21:21:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.026 21:21:20 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.026 21:21:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.026 21:21:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.026 21:21:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:47.026 21:21:20 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:47.026 21:21:20 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.026 21:21:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.026 ************************************ 00:07:47.026 START TEST nvmf_host_management 00:07:47.026 ************************************ 00:07:47.026 
21:21:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.286 * Looking for test storage... 00:07:47.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
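With NET_TYPE=virt the setup that follows never touches a physical NIC: it first tears down any leftovers (the "Cannot find device" messages below are the expected result of that cleanup on a fresh host) and then rebuilds a veth topology. Condensed, the build-out replayed in the next lines amounts to the sketch below; link-up and firewall steps are omitted here and appear in the log itself.

# condensed shape of nvmf_veth_init as replayed below
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br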
00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.286 21:21:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:47.287 Cannot find device "nvmf_init_br" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:47.287 Cannot find device "nvmf_tgt_br" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.287 Cannot find device "nvmf_tgt_br2" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:47.287 Cannot find device "nvmf_init_br" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:47.287 Cannot find device "nvmf_tgt_br" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:47.287 21:21:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:47.287 Cannot find device "nvmf_tgt_br2" 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:47.287 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:47.287 Cannot find device "nvmf_br" 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:47.545 Cannot find device "nvmf_init_if" 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:47.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:47.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
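The namespace, veth pairs and addresses are now in place; the bridge wiring, the iptables ACCEPT rule for TCP port 4420 and the ping checks that follow verify host-to-namespace reachability before the target starts. A quick manual way to inspect the same state, using only the names and addresses shown in the log:

ip -br addr show nvmf_init_if                  # host side: expect 10.0.0.1/24
ip netns exec nvmf_tgt_ns_spdk ip -br addr     # target side: nvmf_tgt_if 10.0.0.2, nvmf_tgt_if2 10.0.0.3
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3       # mirrors the reachability checks below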
00:07:47.545 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:47.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:47.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:07:47.805 00:07:47.805 --- 10.0.0.2 ping statistics --- 00:07:47.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.805 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:07:47.805 21:21:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:47.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:07:47.805 00:07:47.805 --- 10.0.0.3 ping statistics --- 00:07:47.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.805 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:47.805 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:47.805 00:07:47.805 --- 10.0.0.1 ping statistics --- 00:07:47.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.805 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:47.805 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=64884 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64884 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64884 ']' 00:07:47.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:47.806 [2024-07-15 21:21:21.114138] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:47.806 [2024-07-15 21:21:21.114197] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.066 [2024-07-15 21:21:21.258091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.066 [2024-07-15 21:21:21.344229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.066 [2024-07-15 21:21:21.344275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.066 [2024-07-15 21:21:21.344285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.066 [2024-07-15 21:21:21.344293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.066 [2024-07-15 21:21:21.344300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
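The target is launched inside the namespace with core mask 0x1E (binary 11110, i.e. cores 1 through 4, matching the four reactor notices that follow), and the harness then blocks until the RPC socket answers. A hedged sketch of that launch-and-wait step; the polling loop is a simplification, not the verbatim waitforlisten helper.

# start nvmf_tgt in the test namespace on cores 1-4 and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    sleep 0.5
done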
00:07:48.066 [2024-07-15 21:21:21.345264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.066 [2024-07-15 21:21:21.345361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.066 [2024-07-15 21:21:21.345532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.066 [2024-07-15 21:21:21.345533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:48.066 [2024-07-15 21:21:21.387778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.629 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.629 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:48.629 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.629 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.629 21:21:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 21:21:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.889 [2024-07-15 21:21:22.007982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:48.889 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.890 Malloc0 00:07:48.890 [2024-07-15 21:21:22.088724] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64938 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64938 /var/tmp/bdevperf.sock 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 64938 ']' 
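The Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2:4420 announced above are created through a batched RPC file (the heredoc written to rpcs.txt is not echoed), so the individual calls do not appear in the log. An equivalent one-call-at-a-time sequence would look roughly like the sketch below; the transport options, serial number, sizes and NQNs are taken from the log, while the remaining flags are assumptions.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                      # as echoed above
$RPC bdev_malloc_create 64 512 -b Malloc0                         # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0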
00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:48.890 { 00:07:48.890 "params": { 00:07:48.890 "name": "Nvme$subsystem", 00:07:48.890 "trtype": "$TEST_TRANSPORT", 00:07:48.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.890 "adrfam": "ipv4", 00:07:48.890 "trsvcid": "$NVMF_PORT", 00:07:48.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.890 "hdgst": ${hdgst:-false}, 00:07:48.890 "ddgst": ${ddgst:-false} 00:07:48.890 }, 00:07:48.890 "method": "bdev_nvme_attach_controller" 00:07:48.890 } 00:07:48.890 EOF 00:07:48.890 )") 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:48.890 21:21:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:48.890 "params": { 00:07:48.890 "name": "Nvme0", 00:07:48.890 "trtype": "tcp", 00:07:48.890 "traddr": "10.0.0.2", 00:07:48.890 "adrfam": "ipv4", 00:07:48.890 "trsvcid": "4420", 00:07:48.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:48.890 "hdgst": false, 00:07:48.890 "ddgst": false 00:07:48.890 }, 00:07:48.890 "method": "bdev_nvme_attach_controller" 00:07:48.890 }' 00:07:48.890 [2024-07-15 21:21:22.208972] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
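On the initiator side, the JSON fragment printed above is handed to bdevperf through a process substitution (the /dev/fd/63 in the command line). A condensed equivalent that writes the config to a temporary file instead; the outer subsystems/bdev wrapper is the standard SPDK JSON config shape assembled by gen_nvmf_target_json, the file path is a placeholder, and the digest options are left at their defaults.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10    # 10 s verify workload, queue depth 64, 64 KiB I/O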
00:07:48.890 [2024-07-15 21:21:22.209027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64938 ] 00:07:49.163 [2024-07-15 21:21:22.348248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.163 [2024-07-15 21:21:22.440187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.163 [2024-07-15 21:21:22.490195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.420 Running I/O for 10 seconds... 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:49.983 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.984 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.984 [2024-07-15 21:21:23.116675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.116913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.117986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.117995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:49.984 [2024-07-15 21:21:23.118083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.984 [2024-07-15 21:21:23.118205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.984 [2024-07-15 21:21:23.118213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 
[2024-07-15 21:21:23.118269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 
21:21:23.118474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 
21:21:23.118660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.985 [2024-07-15 21:21:23.118735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122bec0 is same with the state(5) to be set 00:07:49.985 [2024-07-15 21:21:23.118804] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x122bec0 was disconnected and freed. reset controller. 
00:07:49.985 [2024-07-15 21:21:23.118904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.985 [2024-07-15 21:21:23.118916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.985 [2024-07-15 21:21:23.118934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.985 [2024-07-15 21:21:23.118951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:49.985 [2024-07-15 21:21:23.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.118977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223d50 is same with the state(5) to be set 00:07:49.985 [2024-07-15 21:21:23.119842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:49.985 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.985 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:49.985 task offset: 8192 on job bdev=Nvme0n1 fails 00:07:49.985 00:07:49.985 Latency(us) 00:07:49.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:49.985 Job: Nvme0n1 ended in about 0.52 seconds with error 00:07:49.985 Verification LBA range: start 0x0 length 0x400 00:07:49.985 Nvme0n1 : 0.52 2094.80 130.93 123.22 0.00 28200.40 2645.13 27583.02 00:07:49.985 =================================================================================================================== 00:07:49.985 Total : 2094.80 130.93 123.22 0.00 28200.40 2645.13 27583.02 00:07:49.985 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.985 [2024-07-15 21:21:23.121627] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.985 [2024-07-15 21:21:23.121656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1223d50 (9): Bad file descriptor 00:07:49.985 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.985 [2024-07-15 21:21:23.127405] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:49.985 [2024-07-15 21:21:23.127642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:49.985 [2024-07-15 21:21:23.127667] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:49.985 [2024-07-15 21:21:23.127683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:49.985 [2024-07-15 21:21:23.127693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:49.985 [2024-07-15 21:21:23.127701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:49.985 [2024-07-15 21:21:23.127710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1223d50 00:07:49.985 [2024-07-15 21:21:23.127733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1223d50 (9): Bad file descriptor 00:07:49.985 [2024-07-15 21:21:23.127747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:49.986 [2024-07-15 21:21:23.127755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:49.986 [2024-07-15 21:21:23.127765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:49.986 [2024-07-15 21:21:23.127778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:49.986 21:21:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.986 21:21:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64938 00:07:50.914 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64938) - No such process 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.914 { 00:07:50.914 "params": { 00:07:50.914 "name": "Nvme$subsystem", 00:07:50.914 "trtype": "$TEST_TRANSPORT", 00:07:50.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.914 "adrfam": "ipv4", 00:07:50.914 "trsvcid": "$NVMF_PORT", 00:07:50.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.914 "hdgst": ${hdgst:-false}, 00:07:50.914 "ddgst": ${ddgst:-false} 00:07:50.914 }, 00:07:50.914 "method": "bdev_nvme_attach_controller" 00:07:50.914 } 00:07:50.914 EOF 00:07:50.914 )") 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 
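For reference, the allow-list round trip exercised above reduces to two target-side RPCs — a minimal sketch only, using the rpc.py path and NQNs from this log, issued against the nvmf target's default RPC socket. Removing the host aborts its queued WRITEs (the ABORTED - SQ DELETION storm above) and makes later reconnect attempts fail with "does not allow host"; re-adding it restores access for subsequent connections.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # cut the host off the subsystem
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # allow it again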
00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:50.914 21:21:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.914 "params": { 00:07:50.914 "name": "Nvme0", 00:07:50.914 "trtype": "tcp", 00:07:50.914 "traddr": "10.0.0.2", 00:07:50.914 "adrfam": "ipv4", 00:07:50.914 "trsvcid": "4420", 00:07:50.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.914 "hdgst": false, 00:07:50.914 "ddgst": false 00:07:50.914 }, 00:07:50.914 "method": "bdev_nvme_attach_controller" 00:07:50.914 }' 00:07:50.914 [2024-07-15 21:21:24.196581] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:07:50.914 [2024-07-15 21:21:24.196641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:07:51.170 [2024-07-15 21:21:24.340407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.170 [2024-07-15 21:21:24.424131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.170 [2024-07-15 21:21:24.473158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.427 Running I/O for 1 seconds... 00:07:52.357 00:07:52.357 Latency(us) 00:07:52.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.357 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.357 Verification LBA range: start 0x0 length 0x400 00:07:52.357 Nvme0n1 : 1.02 2191.85 136.99 0.00 0.00 28735.93 3079.40 27161.91 00:07:52.357 =================================================================================================================== 00:07:52.357 Total : 2191.85 136.99 0.00 0.00 28735.93 3079.40 27161.91 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.659 rmmod nvme_tcp 00:07:52.659 rmmod nvme_fabrics 00:07:52.659 rmmod nvme_keyring 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:52.659 21:21:25 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64884 ']' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64884 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 64884 ']' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 64884 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64884 00:07:52.659 killing process with pid 64884 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64884' 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 64884 00:07:52.659 21:21:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 64884 00:07:52.928 [2024-07-15 21:21:26.135633] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:52.928 00:07:52.928 real 0m5.863s 00:07:52.928 user 0m21.471s 00:07:52.928 sys 0m1.702s 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.928 ************************************ 00:07:52.928 END TEST nvmf_host_management 00:07:52.928 ************************************ 00:07:52.928 21:21:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.928 21:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:52.928 21:21:26 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.928 21:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.928 21:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.928 21:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.928 
************************************ 00:07:52.928 START TEST nvmf_lvol 00:07:52.928 ************************************ 00:07:52.928 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.187 * Looking for test storage... 00:07:53.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.187 Cannot find device "nvmf_tgt_br" 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.187 Cannot find device "nvmf_tgt_br2" 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.187 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.187 Cannot find device "nvmf_tgt_br" 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.446 Cannot find device "nvmf_tgt_br2" 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:07:53.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.446 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:07:53.705 00:07:53.705 --- 10.0.0.2 ping statistics --- 00:07:53.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.705 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:07:53.705 00:07:53.705 --- 10.0.0.3 ping statistics --- 00:07:53.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.705 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:53.705 00:07:53.705 --- 10.0.0.1 ping statistics --- 00:07:53.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.705 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65194 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65194 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65194 ']' 00:07:53.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.705 21:21:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.705 [2024-07-15 21:21:26.973851] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:07:53.705 [2024-07-15 21:21:26.973918] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.965 [2024-07-15 21:21:27.117970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.965 [2024-07-15 21:21:27.199786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.965 [2024-07-15 21:21:27.199843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.965 [2024-07-15 21:21:27.199853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.965 [2024-07-15 21:21:27.199861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.965 [2024-07-15 21:21:27.199868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.965 [2024-07-15 21:21:27.200700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.965 [2024-07-15 21:21:27.200895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.965 [2024-07-15 21:21:27.200896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.965 [2024-07-15 21:21:27.241340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.532 21:21:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.791 [2024-07-15 21:21:28.023117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.791 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.050 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:55.050 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:55.308 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:55.308 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:55.566 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:55.566 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1d189c2f-0ff8-4f00-a05a-a66977411512 00:07:55.566 21:21:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1d189c2f-0ff8-4f00-a05a-a66977411512 lvol 20 00:07:55.824 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=528005fa-4a31-4255-adae-a3ec6e71fc8d 00:07:55.824 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.098 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 528005fa-4a31-4255-adae-a3ec6e71fc8d 00:07:56.098 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.356 [2024-07-15 21:21:29.638164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.356 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.615 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65259 00:07:56.615 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:56.615 21:21:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:57.549 21:21:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 528005fa-4a31-4255-adae-a3ec6e71fc8d MY_SNAPSHOT 00:07:57.808 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d672ca89-9e58-44d4-bf86-7006ea7712b4 00:07:57.808 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 528005fa-4a31-4255-adae-a3ec6e71fc8d 30 00:07:58.066 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d672ca89-9e58-44d4-bf86-7006ea7712b4 MY_CLONE 00:07:58.325 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5488db93-8b0a-4f76-9150-06e02a984e7b 00:07:58.325 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5488db93-8b0a-4f76-9150-06e02a984e7b 00:07:58.584 21:21:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65259 00:08:08.593 Initializing NVMe Controllers 00:08:08.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:08.593 Controller IO queue size 128, less than required. 00:08:08.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:08.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:08.593 Initialization complete. Launching workers. 
00:08:08.593 ======================================================== 00:08:08.593 Latency(us) 00:08:08.593 Device Information : IOPS MiB/s Average min max 00:08:08.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12556.80 49.05 10196.74 2072.50 47053.83 00:08:08.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12628.90 49.33 10140.24 3520.41 53336.10 00:08:08.593 ======================================================== 00:08:08.593 Total : 25185.70 98.38 10168.41 2072.50 53336.10 00:08:08.593 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 528005fa-4a31-4255-adae-a3ec6e71fc8d 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d189c2f-0ff8-4f00-a05a-a66977411512 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:08.593 21:21:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.594 rmmod nvme_tcp 00:08:08.594 rmmod nvme_fabrics 00:08:08.594 rmmod nvme_keyring 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65194 ']' 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65194 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65194 ']' 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65194 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65194 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.594 killing process with pid 65194 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65194' 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65194 00:08:08.594 21:21:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65194 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
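For readers following the trace, the nvmf_lvol run that just finished reduces to roughly the RPC sequence below. This is a condensed sketch assembled from the xtrace output above, not a literal excerpt of the test script: the rpc shorthand variable and the captured $lvs/$lvol/$snap/$clone variables are introduced here for readability, and the UUIDs seen in the log are the runtime values those captures produced.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport init
$rpc bdev_malloc_create 64 512                                    # Malloc0
$rpc bdev_malloc_create 64 512                                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # raid0 across both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # lvstore on the raid bdev, UUID captured
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # lvol named "lvol", size 20 (MiB)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs randwrite against 10.0.0.2:4420 for 10 seconds,
# the lvol is snapshotted, grown, cloned and inflated:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown, matching the nvmf_delete_subsystem / bdev_lvol_delete calls traced above:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"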
00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:08.594 00:08:08.594 real 0m14.877s 00:08:08.594 user 1m0.302s 00:08:08.594 sys 0m5.313s 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 ************************************ 00:08:08.594 END TEST nvmf_lvol 00:08:08.594 ************************************ 00:08:08.594 21:21:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:08.594 21:21:41 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.594 21:21:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:08.594 21:21:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.594 21:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 ************************************ 00:08:08.594 START TEST nvmf_lvs_grow 00:08:08.594 ************************************ 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.594 * Looking for test storage... 
00:08:08.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:08.594 Cannot find device "nvmf_tgt_br" 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.594 Cannot find device "nvmf_tgt_br2" 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:08.594 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:08.595 Cannot find device "nvmf_tgt_br" 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:08.595 Cannot find device "nvmf_tgt_br2" 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.595 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:08.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:08:08.595 00:08:08.595 --- 10.0.0.2 ping statistics --- 00:08:08.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.595 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:08.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:08:08.595 00:08:08.595 --- 10.0.0.3 ping statistics --- 00:08:08.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.595 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:08:08.595 00:08:08.595 --- 10.0.0.1 ping statistics --- 00:08:08.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.595 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65584 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65584 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65584 ']' 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
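The 10.0.0.x addresses exercised by the pings above come from the harness's veth/netns test network (nvmf_veth_init). Stripped of the per-command tracing, the setup amounts to roughly the sketch below; the namespace, interface and bridge names are the ones visible in the trace, and the initial best-effort cleanup (the "Cannot find device" / "Cannot open network namespace" lines) plus error handling are omitted.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2          # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                    # bridge joining the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # keep the host firewall from blocking NVMe/TCP
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target itself then runs inside the namespace, as in the nvmfappstart line above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1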
00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.595 21:21:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.595 [2024-07-15 21:21:41.890792] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:08.595 [2024-07-15 21:21:41.890873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.854 [2024-07-15 21:21:42.020398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.854 [2024-07-15 21:21:42.101492] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.854 [2024-07-15 21:21:42.101535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.854 [2024-07-15 21:21:42.101545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.854 [2024-07-15 21:21:42.101553] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.854 [2024-07-15 21:21:42.101560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.854 [2024-07-15 21:21:42.101588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.854 [2024-07-15 21:21:42.141798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:09.422 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.422 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:09.422 21:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.422 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.422 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.681 [2024-07-15 21:21:42.975138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.681 21:21:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.681 ************************************ 00:08:09.681 START TEST lvs_grow_clean 00:08:09.681 ************************************ 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.681 21:21:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.681 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.940 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.940 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.198 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:10.198 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:10.198 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 66be38ae-7154-4402-aabb-61dfac7cf95f lvol 150 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=24506647-2083-4b2d-a7de-48a54f670663 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:10.456 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:10.714 [2024-07-15 21:21:43.981220] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:10.714 [2024-07-15 21:21:43.981283] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:10.714 true 00:08:10.714 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:10.714 21:21:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:10.972 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:10.972 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.231 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24506647-2083-4b2d-a7de-48a54f670663 00:08:11.231 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.490 [2024-07-15 21:21:44.740344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.490 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65658 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65658 /var/tmp/bdevperf.sock 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65658 ']' 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.748 21:21:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.748 [2024-07-15 21:21:44.993963] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:08:11.748 [2024-07-15 21:21:44.994029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65658 ] 00:08:12.006 [2024-07-15 21:21:45.133240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.006 [2024-07-15 21:21:45.217982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.006 [2024-07-15 21:21:45.258767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.574 21:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.574 21:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:12.574 21:21:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:12.832 Nvme0n1 00:08:12.832 21:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.090 [ 00:08:13.090 { 00:08:13.090 "name": "Nvme0n1", 00:08:13.090 "aliases": [ 00:08:13.090 "24506647-2083-4b2d-a7de-48a54f670663" 00:08:13.090 ], 00:08:13.090 "product_name": "NVMe disk", 00:08:13.090 "block_size": 4096, 00:08:13.090 "num_blocks": 38912, 00:08:13.090 "uuid": "24506647-2083-4b2d-a7de-48a54f670663", 00:08:13.090 "assigned_rate_limits": { 00:08:13.090 "rw_ios_per_sec": 0, 00:08:13.090 "rw_mbytes_per_sec": 0, 00:08:13.090 "r_mbytes_per_sec": 0, 00:08:13.090 "w_mbytes_per_sec": 0 00:08:13.090 }, 00:08:13.090 "claimed": false, 00:08:13.090 "zoned": false, 00:08:13.090 "supported_io_types": { 00:08:13.090 "read": true, 00:08:13.090 "write": true, 00:08:13.090 "unmap": true, 00:08:13.090 "flush": true, 00:08:13.090 "reset": true, 00:08:13.090 "nvme_admin": true, 00:08:13.090 "nvme_io": true, 00:08:13.090 "nvme_io_md": false, 00:08:13.090 "write_zeroes": true, 00:08:13.090 "zcopy": false, 00:08:13.090 "get_zone_info": false, 00:08:13.090 "zone_management": false, 00:08:13.090 "zone_append": false, 00:08:13.090 "compare": true, 00:08:13.090 "compare_and_write": true, 00:08:13.090 "abort": true, 00:08:13.090 "seek_hole": false, 00:08:13.090 "seek_data": false, 00:08:13.090 "copy": true, 00:08:13.090 "nvme_iov_md": false 00:08:13.090 }, 00:08:13.090 "memory_domains": [ 00:08:13.090 { 00:08:13.090 "dma_device_id": "system", 00:08:13.090 "dma_device_type": 1 00:08:13.090 } 00:08:13.090 ], 00:08:13.090 "driver_specific": { 00:08:13.091 "nvme": [ 00:08:13.091 { 00:08:13.091 "trid": { 00:08:13.091 "trtype": "TCP", 00:08:13.091 "adrfam": "IPv4", 00:08:13.091 "traddr": "10.0.0.2", 00:08:13.091 "trsvcid": "4420", 00:08:13.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.091 }, 00:08:13.091 "ctrlr_data": { 00:08:13.091 "cntlid": 1, 00:08:13.091 "vendor_id": "0x8086", 00:08:13.091 "model_number": "SPDK bdev Controller", 00:08:13.091 "serial_number": "SPDK0", 00:08:13.091 "firmware_revision": "24.09", 00:08:13.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.091 "oacs": { 00:08:13.091 "security": 0, 00:08:13.091 "format": 0, 00:08:13.091 "firmware": 0, 00:08:13.091 "ns_manage": 0 00:08:13.091 }, 00:08:13.091 "multi_ctrlr": true, 00:08:13.091 
"ana_reporting": false 00:08:13.091 }, 00:08:13.091 "vs": { 00:08:13.091 "nvme_version": "1.3" 00:08:13.091 }, 00:08:13.091 "ns_data": { 00:08:13.091 "id": 1, 00:08:13.091 "can_share": true 00:08:13.091 } 00:08:13.091 } 00:08:13.091 ], 00:08:13.091 "mp_policy": "active_passive" 00:08:13.091 } 00:08:13.091 } 00:08:13.091 ] 00:08:13.091 21:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65685 00:08:13.091 21:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.091 21:21:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.091 Running I/O for 10 seconds... 00:08:14.071 Latency(us) 00:08:14.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.071 Nvme0n1 : 1.00 10383.00 40.56 0.00 0.00 0.00 0.00 0.00 00:08:14.071 =================================================================================================================== 00:08:14.071 Total : 10383.00 40.56 0.00 0.00 0.00 0.00 0.00 00:08:14.071 00:08:15.006 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:15.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.264 Nvme0n1 : 2.00 10329.00 40.35 0.00 0.00 0.00 0.00 0.00 00:08:15.264 =================================================================================================================== 00:08:15.264 Total : 10329.00 40.35 0.00 0.00 0.00 0.00 0.00 00:08:15.264 00:08:15.264 true 00:08:15.264 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:15.264 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:15.521 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:15.521 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:15.521 21:21:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65685 00:08:16.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.088 Nvme0n1 : 3.00 10350.67 40.43 0.00 0.00 0.00 0.00 0.00 00:08:16.088 =================================================================================================================== 00:08:16.088 Total : 10350.67 40.43 0.00 0.00 0.00 0.00 0.00 00:08:16.088 00:08:17.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.022 Nvme0n1 : 4.00 10353.75 40.44 0.00 0.00 0.00 0.00 0.00 00:08:17.022 =================================================================================================================== 00:08:17.022 Total : 10353.75 40.44 0.00 0.00 0.00 0.00 0.00 00:08:17.022 00:08:18.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.396 Nvme0n1 : 5.00 10351.00 40.43 0.00 0.00 0.00 0.00 0.00 00:08:18.396 =================================================================================================================== 00:08:18.396 Total : 10351.00 40.43 0.00 
0.00 0.00 0.00 0.00 00:08:18.396 00:08:19.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.346 Nvme0n1 : 6.00 10319.17 40.31 0.00 0.00 0.00 0.00 0.00 00:08:19.346 =================================================================================================================== 00:08:19.346 Total : 10319.17 40.31 0.00 0.00 0.00 0.00 0.00 00:08:19.346 00:08:20.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.282 Nvme0n1 : 7.00 10278.29 40.15 0.00 0.00 0.00 0.00 0.00 00:08:20.282 =================================================================================================================== 00:08:20.282 Total : 10278.29 40.15 0.00 0.00 0.00 0.00 0.00 00:08:20.282 00:08:21.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.217 Nvme0n1 : 8.00 10246.88 40.03 0.00 0.00 0.00 0.00 0.00 00:08:21.217 =================================================================================================================== 00:08:21.217 Total : 10246.88 40.03 0.00 0.00 0.00 0.00 0.00 00:08:21.217 00:08:22.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.152 Nvme0n1 : 9.00 10251.33 40.04 0.00 0.00 0.00 0.00 0.00 00:08:22.152 =================================================================================================================== 00:08:22.152 Total : 10251.33 40.04 0.00 0.00 0.00 0.00 0.00 00:08:22.152 00:08:23.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.087 Nvme0n1 : 10.00 10240.30 40.00 0.00 0.00 0.00 0.00 0.00 00:08:23.087 =================================================================================================================== 00:08:23.087 Total : 10240.30 40.00 0.00 0.00 0.00 0.00 0.00 00:08:23.087 00:08:23.087 00:08:23.087 Latency(us) 00:08:23.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.087 Nvme0n1 : 10.01 10243.04 40.01 0.00 0.00 12492.33 8106.46 36636.99 00:08:23.087 =================================================================================================================== 00:08:23.087 Total : 10243.04 40.01 0.00 0.00 12492.33 8106.46 36636.99 00:08:23.087 0 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65658 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65658 ']' 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65658 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65658 00:08:23.087 killing process with pid 65658 00:08:23.087 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.087 00:08:23.087 Latency(us) 00:08:23.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.087 =================================================================================================================== 00:08:23.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65658' 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65658 00:08:23.087 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65658 00:08:23.344 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.602 21:21:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.896 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:23.896 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:23.896 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:23.896 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:23.896 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.153 [2024-07-15 21:21:57.395584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:24.153 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:24.411 request: 00:08:24.411 { 00:08:24.411 "uuid": "66be38ae-7154-4402-aabb-61dfac7cf95f", 00:08:24.411 "method": "bdev_lvol_get_lvstores", 00:08:24.411 "req_id": 1 00:08:24.411 } 00:08:24.411 Got JSON-RPC error response 00:08:24.411 response: 00:08:24.411 { 00:08:24.411 "code": -19, 00:08:24.411 "message": "No such device" 00:08:24.411 } 00:08:24.411 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:24.411 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:24.411 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:24.411 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:24.411 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.669 aio_bdev 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24506647-2083-4b2d-a7de-48a54f670663 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=24506647-2083-4b2d-a7de-48a54f670663 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:24.669 21:21:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.670 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24506647-2083-4b2d-a7de-48a54f670663 -t 2000 00:08:24.928 [ 00:08:24.928 { 00:08:24.928 "name": "24506647-2083-4b2d-a7de-48a54f670663", 00:08:24.928 "aliases": [ 00:08:24.928 "lvs/lvol" 00:08:24.928 ], 00:08:24.928 "product_name": "Logical Volume", 00:08:24.928 "block_size": 4096, 00:08:24.928 "num_blocks": 38912, 00:08:24.928 "uuid": "24506647-2083-4b2d-a7de-48a54f670663", 00:08:24.928 "assigned_rate_limits": { 00:08:24.928 "rw_ios_per_sec": 0, 00:08:24.928 "rw_mbytes_per_sec": 0, 00:08:24.928 "r_mbytes_per_sec": 0, 00:08:24.928 "w_mbytes_per_sec": 0 00:08:24.928 }, 00:08:24.928 "claimed": false, 00:08:24.928 "zoned": false, 00:08:24.928 "supported_io_types": { 00:08:24.928 "read": true, 00:08:24.928 "write": true, 00:08:24.928 "unmap": true, 00:08:24.928 "flush": false, 00:08:24.928 "reset": true, 00:08:24.928 "nvme_admin": false, 00:08:24.928 "nvme_io": false, 00:08:24.928 "nvme_io_md": false, 00:08:24.928 "write_zeroes": true, 00:08:24.928 "zcopy": false, 00:08:24.928 "get_zone_info": false, 00:08:24.928 "zone_management": false, 00:08:24.928 "zone_append": false, 00:08:24.928 "compare": false, 00:08:24.928 "compare_and_write": false, 00:08:24.928 "abort": false, 00:08:24.928 "seek_hole": true, 00:08:24.928 "seek_data": true, 00:08:24.928 "copy": false, 00:08:24.928 "nvme_iov_md": false 00:08:24.928 }, 00:08:24.928 
"driver_specific": { 00:08:24.928 "lvol": { 00:08:24.928 "lvol_store_uuid": "66be38ae-7154-4402-aabb-61dfac7cf95f", 00:08:24.928 "base_bdev": "aio_bdev", 00:08:24.928 "thin_provision": false, 00:08:24.928 "num_allocated_clusters": 38, 00:08:24.928 "snapshot": false, 00:08:24.928 "clone": false, 00:08:24.928 "esnap_clone": false 00:08:24.928 } 00:08:24.928 } 00:08:24.928 } 00:08:24.928 ] 00:08:24.928 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:24.928 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:24.928 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.187 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.187 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:25.187 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.445 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.445 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 24506647-2083-4b2d-a7de-48a54f670663 00:08:25.445 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 66be38ae-7154-4402-aabb-61dfac7cf95f 00:08:25.703 21:21:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.962 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.221 ************************************ 00:08:26.221 END TEST lvs_grow_clean 00:08:26.221 ************************************ 00:08:26.221 00:08:26.221 real 0m16.569s 00:08:26.221 user 0m14.658s 00:08:26.221 sys 0m2.990s 00:08:26.221 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.221 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 ************************************ 00:08:26.480 START TEST lvs_grow_dirty 00:08:26.480 ************************************ 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.480 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.739 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.739 21:21:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=20f5e994-0088-4c0f-8219-d8efa737e760 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:26.998 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 20f5e994-0088-4c0f-8219-d8efa737e760 lvol 150 00:08:27.257 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:27.257 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:27.257 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.517 [2024-07-15 21:22:00.669165] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.517 [2024-07-15 21:22:00.669223] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.517 true 00:08:27.517 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:27.517 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.517 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.517 21:22:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.776 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:28.035 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.294 [2024-07-15 21:22:01.424345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65916 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65916 /var/tmp/bdevperf.sock 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 65916 ']' 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.294 21:22:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.553 [2024-07-15 21:22:01.698615] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:08:28.553 [2024-07-15 21:22:01.698908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65916 ] 00:08:28.553 [2024-07-15 21:22:01.850149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.811 [2024-07-15 21:22:01.934665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.811 [2024-07-15 21:22:01.975717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.375 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.375 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:29.375 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.634 Nvme0n1 00:08:29.634 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.634 [ 00:08:29.634 { 00:08:29.634 "name": "Nvme0n1", 00:08:29.634 "aliases": [ 00:08:29.634 "61c4e0f6-326d-44ea-a7af-f6bb0c397151" 00:08:29.634 ], 00:08:29.634 "product_name": "NVMe disk", 00:08:29.634 "block_size": 4096, 00:08:29.634 "num_blocks": 38912, 00:08:29.634 "uuid": "61c4e0f6-326d-44ea-a7af-f6bb0c397151", 00:08:29.634 "assigned_rate_limits": { 00:08:29.634 "rw_ios_per_sec": 0, 00:08:29.634 "rw_mbytes_per_sec": 0, 00:08:29.634 "r_mbytes_per_sec": 0, 00:08:29.634 "w_mbytes_per_sec": 0 00:08:29.634 }, 00:08:29.634 "claimed": false, 00:08:29.634 "zoned": false, 00:08:29.634 "supported_io_types": { 00:08:29.634 "read": true, 00:08:29.634 "write": true, 00:08:29.634 "unmap": true, 00:08:29.634 "flush": true, 00:08:29.634 "reset": true, 00:08:29.634 "nvme_admin": true, 00:08:29.634 "nvme_io": true, 00:08:29.634 "nvme_io_md": false, 00:08:29.634 "write_zeroes": true, 00:08:29.634 "zcopy": false, 00:08:29.634 "get_zone_info": false, 00:08:29.634 "zone_management": false, 00:08:29.634 "zone_append": false, 00:08:29.634 "compare": true, 00:08:29.634 "compare_and_write": true, 00:08:29.634 "abort": true, 00:08:29.634 "seek_hole": false, 00:08:29.634 "seek_data": false, 00:08:29.634 "copy": true, 00:08:29.634 "nvme_iov_md": false 00:08:29.634 }, 00:08:29.634 "memory_domains": [ 00:08:29.634 { 00:08:29.634 "dma_device_id": "system", 00:08:29.634 "dma_device_type": 1 00:08:29.634 } 00:08:29.634 ], 00:08:29.634 "driver_specific": { 00:08:29.634 "nvme": [ 00:08:29.634 { 00:08:29.634 "trid": { 00:08:29.634 "trtype": "TCP", 00:08:29.634 "adrfam": "IPv4", 00:08:29.634 "traddr": "10.0.0.2", 00:08:29.634 "trsvcid": "4420", 00:08:29.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.634 }, 00:08:29.634 "ctrlr_data": { 00:08:29.634 "cntlid": 1, 00:08:29.634 "vendor_id": "0x8086", 00:08:29.634 "model_number": "SPDK bdev Controller", 00:08:29.634 "serial_number": "SPDK0", 00:08:29.634 "firmware_revision": "24.09", 00:08:29.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.634 "oacs": { 00:08:29.634 "security": 0, 00:08:29.634 "format": 0, 00:08:29.634 "firmware": 0, 00:08:29.634 "ns_manage": 0 00:08:29.634 }, 00:08:29.634 "multi_ctrlr": true, 00:08:29.634 
"ana_reporting": false 00:08:29.634 }, 00:08:29.634 "vs": { 00:08:29.634 "nvme_version": "1.3" 00:08:29.634 }, 00:08:29.634 "ns_data": { 00:08:29.634 "id": 1, 00:08:29.634 "can_share": true 00:08:29.634 } 00:08:29.634 } 00:08:29.634 ], 00:08:29.634 "mp_policy": "active_passive" 00:08:29.634 } 00:08:29.634 } 00:08:29.634 ] 00:08:29.634 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65934 00:08:29.634 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.634 21:22:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.892 Running I/O for 10 seconds... 00:08:30.824 Latency(us) 00:08:30.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.824 Nvme0n1 : 1.00 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:30.824 =================================================================================================================== 00:08:30.824 Total : 10541.00 41.18 0.00 0.00 0.00 0.00 0.00 00:08:30.824 00:08:31.759 21:22:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:31.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.759 Nvme0n1 : 2.00 10600.50 41.41 0.00 0.00 0.00 0.00 0.00 00:08:31.759 =================================================================================================================== 00:08:31.759 Total : 10600.50 41.41 0.00 0.00 0.00 0.00 0.00 00:08:31.759 00:08:32.016 true 00:08:32.016 21:22:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:32.016 21:22:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:32.273 21:22:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.273 21:22:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.273 21:22:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65934 00:08:32.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.839 Nvme0n1 : 3.00 10575.00 41.31 0.00 0.00 0.00 0.00 0.00 00:08:32.839 =================================================================================================================== 00:08:32.839 Total : 10575.00 41.31 0.00 0.00 0.00 0.00 0.00 00:08:32.839 00:08:33.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.772 Nvme0n1 : 4.00 10503.00 41.03 0.00 0.00 0.00 0.00 0.00 00:08:33.772 =================================================================================================================== 00:08:33.772 Total : 10503.00 41.03 0.00 0.00 0.00 0.00 0.00 00:08:33.772 00:08:34.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.706 Nvme0n1 : 5.00 10459.20 40.86 0.00 0.00 0.00 0.00 0.00 00:08:34.706 =================================================================================================================== 00:08:34.706 Total : 10459.20 40.86 0.00 
0.00 0.00 0.00 0.00 00:08:34.706 00:08:36.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.082 Nvme0n1 : 6.00 10399.33 40.62 0.00 0.00 0.00 0.00 0.00 00:08:36.082 =================================================================================================================== 00:08:36.082 Total : 10399.33 40.62 0.00 0.00 0.00 0.00 0.00 00:08:36.082 00:08:37.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.018 Nvme0n1 : 7.00 10196.00 39.83 0.00 0.00 0.00 0.00 0.00 00:08:37.018 =================================================================================================================== 00:08:37.018 Total : 10196.00 39.83 0.00 0.00 0.00 0.00 0.00 00:08:37.018 00:08:37.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.955 Nvme0n1 : 8.00 9890.12 38.63 0.00 0.00 0.00 0.00 0.00 00:08:37.955 =================================================================================================================== 00:08:37.955 Total : 9890.12 38.63 0.00 0.00 0.00 0.00 0.00 00:08:37.955 00:08:38.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.891 Nvme0n1 : 9.00 9891.89 38.64 0.00 0.00 0.00 0.00 0.00 00:08:38.891 =================================================================================================================== 00:08:38.891 Total : 9891.89 38.64 0.00 0.00 0.00 0.00 0.00 00:08:38.891 00:08:39.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.828 Nvme0n1 : 10.00 9831.60 38.40 0.00 0.00 0.00 0.00 0.00 00:08:39.828 =================================================================================================================== 00:08:39.828 Total : 9831.60 38.40 0.00 0.00 0.00 0.00 0.00 00:08:39.828 00:08:39.828 00:08:39.828 Latency(us) 00:08:39.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.828 Nvme0n1 : 10.01 9838.66 38.43 0.00 0.00 13005.11 4579.62 326785.13 00:08:39.828 =================================================================================================================== 00:08:39.828 Total : 9838.66 38.43 0.00 0.00 13005.11 4579.62 326785.13 00:08:39.828 0 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65916 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 65916 ']' 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 65916 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65916 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65916' 00:08:39.828 killing process with pid 65916 00:08:39.828 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.828 
00:08:39.828 Latency(us) 00:08:39.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.828 =================================================================================================================== 00:08:39.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 65916 00:08:39.828 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 65916 00:08:40.087 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.345 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.345 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:40.345 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65584 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65584 00:08:40.603 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65584 Killed "${NVMF_APP[@]}" "$@" 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.603 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66068 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66068 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66068 ']' 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
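Note: the cluster counts checked before and after the grow are consistent with simple arithmetic on the 4 MiB cluster size; the one-cluster metadata overhead is inferred from the numbers rather than stated anywhere in the output:

    echo $(( 400 / 4 - 1 ))               # 99 -> total_data_clusters after bdev_lvol_grow_lvstore
    echo $(( 38912 * 4096 / 4194304 ))    # 38 -> clusters backing the 38912-block lvol (150 MiB rounded up to 152 MiB)
    echo $(( 99 - 38 ))                   # 61 -> free_clusters, matching the value verified above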
00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.862 21:22:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.862 [2024-07-15 21:22:14.025883] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:40.862 [2024-07-15 21:22:14.025950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.862 [2024-07-15 21:22:14.164238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.121 [2024-07-15 21:22:14.248260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.121 [2024-07-15 21:22:14.248311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.121 [2024-07-15 21:22:14.248321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.121 [2024-07-15 21:22:14.248328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.121 [2024-07-15 21:22:14.248335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.121 [2024-07-15 21:22:14.248360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.121 [2024-07-15 21:22:14.288803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.689 21:22:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.947 [2024-07-15 21:22:15.094572] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:41.947 [2024-07-15 21:22:15.095075] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:41.947 [2024-07-15 21:22:15.095376] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
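Note: because the previous target was killed with SIGKILL, the lvstore was never cleanly unloaded; re-creating the AIO bdev on the same file is enough to bring it back, and the blobstore notices the dirty state and replays its metadata (the bs_recover / "Recover: blob" notices above). The re-attach boils down to (same names as the trace, the lvol UUID is a placeholder):

    scripts/rpc.py bdev_aio_create ./aio_bdev_file aio_bdev 4096   # triggers blobstore recovery
    scripts/rpc.py bdev_wait_for_examine                           # let vbdev_lvol claim the recovered store
    scripts/rpc.py bdev_get_bdevs -b <lvol_uuid> -t 2000           # the old lvol reappears as lvs/lvol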
00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:41.947 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.216 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61c4e0f6-326d-44ea-a7af-f6bb0c397151 -t 2000 00:08:42.216 [ 00:08:42.216 { 00:08:42.216 "name": "61c4e0f6-326d-44ea-a7af-f6bb0c397151", 00:08:42.216 "aliases": [ 00:08:42.216 "lvs/lvol" 00:08:42.216 ], 00:08:42.216 "product_name": "Logical Volume", 00:08:42.216 "block_size": 4096, 00:08:42.216 "num_blocks": 38912, 00:08:42.216 "uuid": "61c4e0f6-326d-44ea-a7af-f6bb0c397151", 00:08:42.216 "assigned_rate_limits": { 00:08:42.216 "rw_ios_per_sec": 0, 00:08:42.216 "rw_mbytes_per_sec": 0, 00:08:42.216 "r_mbytes_per_sec": 0, 00:08:42.216 "w_mbytes_per_sec": 0 00:08:42.216 }, 00:08:42.216 "claimed": false, 00:08:42.216 "zoned": false, 00:08:42.216 "supported_io_types": { 00:08:42.216 "read": true, 00:08:42.216 "write": true, 00:08:42.216 "unmap": true, 00:08:42.216 "flush": false, 00:08:42.216 "reset": true, 00:08:42.216 "nvme_admin": false, 00:08:42.216 "nvme_io": false, 00:08:42.216 "nvme_io_md": false, 00:08:42.216 "write_zeroes": true, 00:08:42.216 "zcopy": false, 00:08:42.216 "get_zone_info": false, 00:08:42.216 "zone_management": false, 00:08:42.216 "zone_append": false, 00:08:42.216 "compare": false, 00:08:42.216 "compare_and_write": false, 00:08:42.216 "abort": false, 00:08:42.216 "seek_hole": true, 00:08:42.216 "seek_data": true, 00:08:42.216 "copy": false, 00:08:42.216 "nvme_iov_md": false 00:08:42.216 }, 00:08:42.216 "driver_specific": { 00:08:42.216 "lvol": { 00:08:42.216 "lvol_store_uuid": "20f5e994-0088-4c0f-8219-d8efa737e760", 00:08:42.216 "base_bdev": "aio_bdev", 00:08:42.216 "thin_provision": false, 00:08:42.216 "num_allocated_clusters": 38, 00:08:42.216 "snapshot": false, 00:08:42.216 "clone": false, 00:08:42.216 "esnap_clone": false 00:08:42.216 } 00:08:42.216 } 00:08:42.216 } 00:08:42.216 ] 00:08:42.216 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:42.216 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:42.216 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:42.493 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:42.493 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:42.493 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:42.751 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:42.751 21:22:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.751 [2024-07-15 21:22:16.114598] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:43.010 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:43.010 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:43.010 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:43.010 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.010 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:43.011 request: 00:08:43.011 { 00:08:43.011 "uuid": "20f5e994-0088-4c0f-8219-d8efa737e760", 00:08:43.011 "method": "bdev_lvol_get_lvstores", 00:08:43.011 "req_id": 1 00:08:43.011 } 00:08:43.011 Got JSON-RPC error response 00:08:43.011 response: 00:08:43.011 { 00:08:43.011 "code": -19, 00:08:43.011 "message": "No such device" 00:08:43.011 } 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.011 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.270 aio_bdev 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:43.270 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.529 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61c4e0f6-326d-44ea-a7af-f6bb0c397151 -t 2000 00:08:43.788 [ 00:08:43.788 { 00:08:43.788 "name": "61c4e0f6-326d-44ea-a7af-f6bb0c397151", 00:08:43.788 "aliases": [ 00:08:43.788 "lvs/lvol" 00:08:43.788 ], 00:08:43.788 "product_name": "Logical Volume", 00:08:43.788 "block_size": 4096, 00:08:43.788 "num_blocks": 38912, 00:08:43.788 "uuid": "61c4e0f6-326d-44ea-a7af-f6bb0c397151", 00:08:43.788 "assigned_rate_limits": { 00:08:43.788 "rw_ios_per_sec": 0, 00:08:43.788 "rw_mbytes_per_sec": 0, 00:08:43.788 "r_mbytes_per_sec": 0, 00:08:43.788 "w_mbytes_per_sec": 0 00:08:43.788 }, 00:08:43.788 "claimed": false, 00:08:43.788 "zoned": false, 00:08:43.788 "supported_io_types": { 00:08:43.788 "read": true, 00:08:43.788 "write": true, 00:08:43.788 "unmap": true, 00:08:43.788 "flush": false, 00:08:43.788 "reset": true, 00:08:43.788 "nvme_admin": false, 00:08:43.788 "nvme_io": false, 00:08:43.788 "nvme_io_md": false, 00:08:43.788 "write_zeroes": true, 00:08:43.788 "zcopy": false, 00:08:43.788 "get_zone_info": false, 00:08:43.788 "zone_management": false, 00:08:43.788 "zone_append": false, 00:08:43.788 "compare": false, 00:08:43.788 "compare_and_write": false, 00:08:43.788 "abort": false, 00:08:43.788 "seek_hole": true, 00:08:43.788 "seek_data": true, 00:08:43.788 "copy": false, 00:08:43.788 "nvme_iov_md": false 00:08:43.788 }, 00:08:43.788 "driver_specific": { 00:08:43.788 "lvol": { 00:08:43.788 "lvol_store_uuid": "20f5e994-0088-4c0f-8219-d8efa737e760", 00:08:43.788 "base_bdev": "aio_bdev", 00:08:43.788 "thin_provision": false, 00:08:43.788 "num_allocated_clusters": 38, 00:08:43.788 "snapshot": false, 00:08:43.788 "clone": false, 00:08:43.788 "esnap_clone": false 00:08:43.788 } 00:08:43.788 } 00:08:43.788 } 00:08:43.788 ] 00:08:43.788 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:43.788 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:43.788 21:22:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:43.788 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:43.788 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:43.788 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:44.048 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:44.048 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 61c4e0f6-326d-44ea-a7af-f6bb0c397151 00:08:44.307 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 20f5e994-0088-4c0f-8219-d8efa737e760 00:08:44.565 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.565 21:22:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:45.131 ************************************ 00:08:45.131 END TEST lvs_grow_dirty 00:08:45.131 ************************************ 00:08:45.131 00:08:45.131 real 0m18.655s 00:08:45.131 user 0m37.359s 00:08:45.131 sys 0m7.855s 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:45.131 nvmf_trace.0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.131 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.389 rmmod nvme_tcp 00:08:45.389 rmmod nvme_fabrics 00:08:45.389 rmmod nvme_keyring 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66068 ']' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66068 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66068 ']' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66068 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66068 00:08:45.389 killing process with pid 66068 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66068' 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66068 00:08:45.389 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66068 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:45.679 ************************************ 00:08:45.679 END TEST nvmf_lvs_grow 00:08:45.679 ************************************ 00:08:45.679 00:08:45.679 real 0m37.661s 00:08:45.679 user 0m57.344s 00:08:45.679 sys 0m11.700s 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.679 21:22:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.679 21:22:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:45.679 21:22:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.679 21:22:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.679 21:22:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.679 21:22:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.679 ************************************ 00:08:45.679 START TEST nvmf_bdev_io_wait 00:08:45.679 ************************************ 00:08:45.679 21:22:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:45.939 * Looking for test storage... 
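Note: between the two suites the harness unwinds everything it set up: the target process is killed, the NVMe/TCP kernel modules are unloaded, and the virtual network is flushed so nvmf_bdev_io_wait can rebuild it from scratch. Roughly (the namespace removal is implied by _remove_spdk_ns rather than shown verbatim):

    killprocess "$nvmfpid"            # kill + wait on the nvmf_tgt from the previous test
    sync
    modprobe -v -r nvme-tcp           # unloads nvme_tcp, nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete nvmf_tgt_ns_spdk  # assumed content of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if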
00:08:45.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.939 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:45.940 Cannot find device "nvmf_tgt_br" 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.940 Cannot find device "nvmf_tgt_br2" 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:45.940 Cannot find device "nvmf_tgt_br" 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:45.940 Cannot find device "nvmf_tgt_br2" 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:45.940 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:46.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:46.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:46.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:08:46.199 00:08:46.199 --- 10.0.0.2 ping statistics --- 00:08:46.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.199 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:46.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:46.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:46.199 00:08:46.199 --- 10.0.0.3 ping statistics --- 00:08:46.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.199 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:46.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:08:46.199 00:08:46.199 --- 10.0.0.1 ping statistics --- 00:08:46.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.199 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.199 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66369 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66369 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66369 ']' 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
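Note: the nvmftestinit block above builds the NET_TYPE=virt topology used by these TCP tests: the target lives in the nvmf_tgt_ns_spdk namespace, the initiator side stays on the host, and veth pairs joined by a bridge connect the two. Condensed to one target interface (the second interface nvmf_tgt_if2/10.0.0.3 and the individual link-up steps are omitted; names and addresses are the ones in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # host -> target namespace sanity check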
00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.459 21:22:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:46.459 [2024-07-15 21:22:19.621271] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:46.459 [2024-07-15 21:22:19.621348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.459 [2024-07-15 21:22:19.765123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.718 [2024-07-15 21:22:19.860410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.718 [2024-07-15 21:22:19.860649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.718 [2024-07-15 21:22:19.860790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.718 [2024-07-15 21:22:19.860801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.718 [2024-07-15 21:22:19.860808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.718 [2024-07-15 21:22:19.860968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.718 [2024-07-15 21:22:19.861724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.718 [2024-07-15 21:22:19.862083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.718 [2024-07-15 21:22:19.862083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.285 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 [2024-07-15 21:22:20.574522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.286 
21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 [2024-07-15 21:22:20.589861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 Malloc0 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.286 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.545 [2024-07-15 21:22:20.656304] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66407 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66409 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.545 { 00:08:47.545 "params": { 00:08:47.545 "name": "Nvme$subsystem", 00:08:47.545 "trtype": "$TEST_TRANSPORT", 00:08:47.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.545 "adrfam": "ipv4", 00:08:47.545 "trsvcid": "$NVMF_PORT", 00:08:47.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.545 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:47.545 "hdgst": ${hdgst:-false}, 00:08:47.545 "ddgst": ${ddgst:-false} 00:08:47.545 }, 00:08:47.545 "method": "bdev_nvme_attach_controller" 00:08:47.545 } 00:08:47.545 EOF 00:08:47.545 )") 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66411 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.545 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.546 { 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme$subsystem", 00:08:47.546 "trtype": "$TEST_TRANSPORT", 00:08:47.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "$NVMF_PORT", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.546 "hdgst": ${hdgst:-false}, 00:08:47.546 "ddgst": ${ddgst:-false} 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 } 00:08:47.546 EOF 00:08:47.546 )") 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66414 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.546 { 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme$subsystem", 00:08:47.546 "trtype": "$TEST_TRANSPORT", 00:08:47.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "$NVMF_PORT", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.546 "hdgst": ${hdgst:-false}, 00:08:47.546 "ddgst": ${ddgst:-false} 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 } 00:08:47.546 EOF 00:08:47.546 )") 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:47.546 { 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme$subsystem", 00:08:47.546 "trtype": "$TEST_TRANSPORT", 00:08:47.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "$NVMF_PORT", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.546 "hdgst": ${hdgst:-false}, 00:08:47.546 "ddgst": ${ddgst:-false} 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 } 00:08:47.546 EOF 00:08:47.546 )") 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme1", 00:08:47.546 "trtype": "tcp", 00:08:47.546 "traddr": "10.0.0.2", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "4420", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.546 "hdgst": false, 00:08:47.546 "ddgst": false 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 }' 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme1", 00:08:47.546 "trtype": "tcp", 00:08:47.546 "traddr": "10.0.0.2", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "4420", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.546 "hdgst": false, 00:08:47.546 "ddgst": false 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 }' 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme1", 00:08:47.546 "trtype": "tcp", 00:08:47.546 "traddr": "10.0.0.2", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "4420", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.546 "hdgst": false, 00:08:47.546 "ddgst": false 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 }' 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
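The four bdevperf instances launched above each receive their controller definition over an anonymous file descriptor (--json /dev/fd/63): gen_nvmf_target_json emits one bdev_nvme_attach_controller params block per subsystem, and the jq/printf steps render it into the resolved Nvme1 JSON printed in the trace. A minimal standalone sketch of the same pattern, assuming the conventional "subsystems"/"bdev"/"config" wrapper around the printed params object (gen_cfg is a hypothetical stand-in for the test helper, and the bdevperf path is relative to the SPDK repo root):

gen_cfg() {
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# process substitution hands bdevperf the config on an fd, just like --json /dev/fd/63 in the trace
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256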
00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:47.546 "params": { 00:08:47.546 "name": "Nvme1", 00:08:47.546 "trtype": "tcp", 00:08:47.546 "traddr": "10.0.0.2", 00:08:47.546 "adrfam": "ipv4", 00:08:47.546 "trsvcid": "4420", 00:08:47.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.546 "hdgst": false, 00:08:47.546 "ddgst": false 00:08:47.546 }, 00:08:47.546 "method": "bdev_nvme_attach_controller" 00:08:47.546 }' 00:08:47.546 [2024-07-15 21:22:20.716467] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:47.546 [2024-07-15 21:22:20.716528] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:47.546 [2024-07-15 21:22:20.718035] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:47.546 [2024-07-15 21:22:20.718083] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:47.546 21:22:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66407 00:08:47.546 [2024-07-15 21:22:20.735205] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:47.546 [2024-07-15 21:22:20.735366] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:47.546 [2024-07-15 21:22:20.739140] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:47.546 [2024-07-15 21:22:20.739199] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:47.546 [2024-07-15 21:22:20.906368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.806 [2024-07-15 21:22:20.971884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.806 [2024-07-15 21:22:20.989945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:47.806 [2024-07-15 21:22:21.027661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.806 [2024-07-15 21:22:21.035946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.806 [2024-07-15 21:22:21.053477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:47.806 [2024-07-15 21:22:21.091706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.806 [2024-07-15 21:22:21.099922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.806 [2024-07-15 21:22:21.115209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:47.806 Running I/O for 1 seconds... 
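Note how the write/read/flush/unmap jobs run as four concurrent SPDK applications: each gets its own core mask (0x10/0x20/0x40/0x80), its own shared-memory id (-i 1..4) and, as the EAL parameter lines show, a distinct --file-prefix (spdk1..spdk4) so the instances can coexist, while the script simply records and waits on their PIDs. A rough sketch of that launch-and-wait pattern, reusing the hypothetical gen_cfg helper from the previous sketch:

BDEVPERF=./build/examples/bdevperf
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_cfg) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_cfg) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_cfg) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
# equivalent to the wait 66407 / 66409 / 66411 / 66414 calls in the trace
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"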
00:08:47.806 [2024-07-15 21:22:21.152413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.065 [2024-07-15 21:22:21.176226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:48.065 Running I/O for 1 seconds... 00:08:48.065 [2024-07-15 21:22:21.213104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:48.065 Running I/O for 1 seconds... 00:08:48.065 Running I/O for 1 seconds... 00:08:49.000 00:08:49.000 Latency(us) 00:08:49.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.000 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:49.000 Nvme1n1 : 1.00 220629.21 861.83 0.00 0.00 578.10 296.10 1217.29 00:08:49.000 =================================================================================================================== 00:08:49.000 Total : 220629.21 861.83 0.00 0.00 578.10 296.10 1217.29 00:08:49.000 00:08:49.000 Latency(us) 00:08:49.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.000 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:49.000 Nvme1n1 : 1.01 9323.45 36.42 0.00 0.00 13664.80 7316.87 19266.00 00:08:49.000 =================================================================================================================== 00:08:49.000 Total : 9323.45 36.42 0.00 0.00 13664.80 7316.87 19266.00 00:08:49.000 00:08:49.000 Latency(us) 00:08:49.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.000 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:49.000 Nvme1n1 : 1.01 6744.26 26.34 0.00 0.00 18861.28 10738.43 35794.76 00:08:49.000 =================================================================================================================== 00:08:49.000 Total : 6744.26 26.34 0.00 0.00 18861.28 10738.43 35794.76 00:08:49.000 00:08:49.000 Latency(us) 00:08:49.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.000 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:49.000 Nvme1n1 : 1.01 6464.54 25.25 0.00 0.00 19703.88 9317.17 27793.58 00:08:49.000 =================================================================================================================== 00:08:49.000 Total : 6464.54 25.25 0.00 0.00 19703.88 9317.17 27793.58 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66409 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66411 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66414 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:49.260 21:22:22 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.260 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.260 rmmod nvme_tcp 00:08:49.260 rmmod nvme_fabrics 00:08:49.520 rmmod nvme_keyring 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66369 ']' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66369 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66369 ']' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66369 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66369 00:08:49.520 killing process with pid 66369 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66369' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66369 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66369 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.520 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.779 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:49.779 00:08:49.779 real 0m3.963s 00:08:49.779 user 0m16.646s 00:08:49.779 sys 0m2.247s 00:08:49.779 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.779 21:22:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:49.779 ************************************ 00:08:49.779 END TEST nvmf_bdev_io_wait 00:08:49.779 ************************************ 00:08:49.779 21:22:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:49.779 21:22:22 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:49.779 21:22:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:49.779 21:22:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.779 21:22:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:49.779 ************************************ 00:08:49.779 START TEST nvmf_queue_depth 00:08:49.779 ************************************ 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:49.779 * Looking for test storage... 00:08:49.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.779 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:50.045 Cannot find device "nvmf_tgt_br" 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.045 Cannot find device "nvmf_tgt_br2" 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:50.045 21:22:23 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:50.045 Cannot find device "nvmf_tgt_br" 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:50.045 Cannot find device "nvmf_tgt_br2" 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.045 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.304 
21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.304 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:50.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:08:50.305 00:08:50.305 --- 10.0.0.2 ping statistics --- 00:08:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.305 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:50.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:50.305 00:08:50.305 --- 10.0.0.3 ping statistics --- 00:08:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.305 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:50.305 00:08:50.305 --- 10.0.0.1 ping statistics --- 00:08:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.305 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66645 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66645 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66645 ']' 00:08:50.305 21:22:23 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.305 21:22:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:50.564 [2024-07-15 21:22:23.684984] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:50.564 [2024-07-15 21:22:23.685091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.564 [2024-07-15 21:22:23.836135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.823 [2024-07-15 21:22:23.932530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.823 [2024-07-15 21:22:23.932571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.823 [2024-07-15 21:22:23.932580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.823 [2024-07-15 21:22:23.932589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.823 [2024-07-15 21:22:23.932595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
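The target side of this test is assembled the same way each time: nvmf_veth_init plumbs veth pairs into an nvmf_tgt_ns_spdk namespace behind the nvmf_br bridge, nvmf_tgt is started inside that namespace, and the subsystem is then configured over the default RPC socket. A condensed standalone sketch of that bring-up using the names and addresses from the trace (only the first target interface is shown, the socket-polling loop is an illustrative stand-in for waitforlisten, the rpc.py calls correspond to the rpc_cmd wrappers logged here, and paths are relative to the SPDK repo root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # the initiator side reaches the target address inside the namespace
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # illustrative stand-in for waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420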
00:08:50.823 [2024-07-15 21:22:23.932621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.823 [2024-07-15 21:22:23.973838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 [2024-07-15 21:22:24.572331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 Malloc0 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 [2024-07-15 21:22:24.634296] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66677 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66677 /var/tmp/bdevperf.sock 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 66677 ']' 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:51.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.387 21:22:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.387 [2024-07-15 21:22:24.692013] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:08:51.387 [2024-07-15 21:22:24.692073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66677 ] 00:08:51.645 [2024-07-15 21:22:24.832839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.645 [2024-07-15 21:22:24.929953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.645 [2024-07-15 21:22:24.971562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:52.212 21:22:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.212 21:22:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:52.212 21:22:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:52.212 21:22:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.212 21:22:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.500 NVMe0n1 00:08:52.500 21:22:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.500 21:22:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.500 Running I/O for 10 seconds... 
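Unlike the bdev_io_wait jobs, this bdevperf instance is started idle (-z) on its own RPC socket; the NVMe-oF controller is attached to it at runtime and the deep-queue run (-q 1024) is then triggered through bdevperf.py, producing the summary that follows. A sketch of that remote-controlled flow with the flags and socket path from the trace (paths again relative to the SPDK repo root):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# attach the namespace exported at 10.0.0.2:4420 inside the running bdevperf app; it appears as NVMe0n1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# start the configured workload and block until it finishes
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$bdevperf_pid"    # corresponds to killprocess 66677 below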
00:09:02.485 00:09:02.486 Latency(us) 00:09:02.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.486 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:02.486 Verification LBA range: start 0x0 length 0x4000 00:09:02.486 NVMe0n1 : 10.08 10159.67 39.69 0.00 0.00 100419.19 19160.73 70326.18 00:09:02.486 =================================================================================================================== 00:09:02.486 Total : 10159.67 39.69 0.00 0.00 100419.19 19160.73 70326.18 00:09:02.486 0 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66677 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66677 ']' 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66677 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66677 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66677' 00:09:02.486 killing process with pid 66677 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66677 00:09:02.486 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.486 00:09:02.486 Latency(us) 00:09:02.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.486 =================================================================================================================== 00:09:02.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.486 21:22:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66677 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.743 rmmod nvme_tcp 00:09:02.743 rmmod nvme_fabrics 00:09:02.743 rmmod nvme_keyring 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66645 ']' 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66645 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 66645 ']' 00:09:02.743 
21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 66645 00:09:02.743 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66645 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:03.001 killing process with pid 66645 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66645' 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 66645 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 66645 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.001 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.259 21:22:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:03.259 00:09:03.259 real 0m13.384s 00:09:03.259 user 0m22.532s 00:09:03.259 sys 0m2.576s 00:09:03.259 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.259 21:22:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.259 ************************************ 00:09:03.259 END TEST nvmf_queue_depth 00:09:03.259 ************************************ 00:09:03.259 21:22:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:03.260 21:22:36 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:03.260 21:22:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:03.260 21:22:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.260 21:22:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.260 ************************************ 00:09:03.260 START TEST nvmf_target_multipath 00:09:03.260 ************************************ 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:03.260 * Looking for test storage... 
00:09:03.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.260 21:22:36 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:03.260 Cannot find device "nvmf_tgt_br" 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.260 Cannot find device "nvmf_tgt_br2" 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:03.260 Cannot find device "nvmf_tgt_br" 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:03.260 
21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:03.260 Cannot find device "nvmf_tgt_br2" 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:03.260 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:03.519 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:03.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:03.520 00:09:03.520 --- 10.0.0.2 ping statistics --- 00:09:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.520 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:03.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:03.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:09:03.520 00:09:03.520 --- 10.0.0.3 ping statistics --- 00:09:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.520 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:03.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:03.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:03.520 00:09:03.520 --- 10.0.0.1 ping statistics --- 00:09:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.520 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66993 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66993 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@829 -- # '[' -z 66993 ']' 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:03.520 21:22:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:03.779 [2024-07-15 21:22:36.890455] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:03.779 [2024-07-15 21:22:36.890558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.779 [2024-07-15 21:22:37.022042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.779 [2024-07-15 21:22:37.124574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.779 [2024-07-15 21:22:37.124628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.779 [2024-07-15 21:22:37.124642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.779 [2024-07-15 21:22:37.124653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.779 [2024-07-15 21:22:37.124663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
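The nvmf_veth_init sequence traced above reduces to a small two-path topology: the target runs inside the nvmf_tgt_ns_spdk namespace with addresses 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 in the root namespace, and a bridge joins the host-side veth peers. A condensed sketch, reconstructed only from the commands visible in this log (not the authoritative nvmf/common.sh; the link-up steps and ping checks are folded into comments):

    # Target namespace plus three veth pairs: one initiator-side, two target-side (one per path).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target path A 10.0.0.2, target path B 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bridge the host-side peers, open TCP/4420, and allow forwarding across the bridge.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # After every interface is up and the three addresses answer ping,
    # the target is started inside the namespace (pid 66993 in this run).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The reactor and TCP transport notices that follow are that process coming up.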
00:09:03.779 [2024-07-15 21:22:37.124876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.779 [2024-07-15 21:22:37.125200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.779 [2024-07-15 21:22:37.125281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.779 [2024-07-15 21:22:37.125346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.038 [2024-07-15 21:22:37.168284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.601 21:22:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:04.878 [2024-07-15 21:22:38.082003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.878 21:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:05.135 Malloc0 00:09:05.135 21:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:05.394 21:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.652 21:22:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.910 [2024-07-15 21:22:39.123508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.910 21:22:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:06.169 [2024-07-15 21:22:39.343348] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:06.169 21:22:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:06.169 21:22:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:06.427 21:22:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.427 21:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:06.427 21:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.427 21:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:06.427 21:22:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67088 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:08.329 21:22:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:08.588 [global] 00:09:08.589 thread=1 00:09:08.589 invalidate=1 00:09:08.589 rw=randrw 00:09:08.589 time_based=1 00:09:08.589 runtime=6 00:09:08.589 ioengine=libaio 00:09:08.589 direct=1 00:09:08.589 bs=4096 00:09:08.589 iodepth=128 00:09:08.589 norandommap=0 00:09:08.589 numjobs=1 00:09:08.589 00:09:08.589 verify_dump=1 00:09:08.589 verify_backlog=512 00:09:08.589 verify_state_save=0 00:09:08.589 do_verify=1 00:09:08.589 verify=crc32c-intel 00:09:08.589 [job0] 00:09:08.589 filename=/dev/nvme0n1 00:09:08.589 Could not set queue depth (nvme0n1) 00:09:08.589 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.589 fio-3.35 00:09:08.589 Starting 1 thread 00:09:09.548 21:22:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:09.548 21:22:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:09.807 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:10.065 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:10.321 21:22:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67088 00:09:15.588 00:09:15.588 job0: (groupid=0, jobs=1): err= 0: pid=67109: Mon Jul 15 21:22:48 2024 00:09:15.588 read: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(305MiB/6005msec) 00:09:15.588 slat (usec): min=4, max=4915, avg=42.41, stdev=154.56 00:09:15.588 clat (usec): min=890, max=12747, avg=6815.30, stdev=1281.98 00:09:15.588 lat (usec): min=916, max=12763, avg=6857.70, stdev=1287.66 00:09:15.588 clat percentiles (usec): 00:09:15.588 | 1.00th=[ 4015], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6063], 00:09:15.588 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:09:15.588 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 8094], 95.00th=[ 9634], 00:09:15.588 | 99.00th=[10814], 99.50th=[11207], 99.90th=[11731], 99.95th=[11863], 00:09:15.588 | 99.99th=[12649] 00:09:15.588 bw ( KiB/s): min=13192, max=31632, per=51.68%, avg=26836.45, stdev=6717.81, samples=11 00:09:15.588 iops : min= 3298, max= 7908, avg=6709.09, stdev=1679.44, samples=11 00:09:15.588 write: IOPS=7410, BW=28.9MiB/s (30.4MB/s)(155MiB/5347msec); 0 zone resets 00:09:15.588 slat (usec): min=11, max=2503, avg=55.32, stdev=97.91 00:09:15.588 clat (usec): min=745, max=12171, avg=5795.55, stdev=1105.13 00:09:15.588 lat (usec): min=817, max=12200, avg=5850.87, stdev=1108.38 00:09:15.588 clat percentiles (usec): 00:09:15.588 | 1.00th=[ 3294], 5.00th=[ 3982], 10.00th=[ 4359], 20.00th=[ 4948], 00:09:15.588 | 30.00th=[ 5342], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063], 00:09:15.588 | 70.00th=[ 6259], 80.00th=[ 6521], 90.00th=[ 6915], 95.00th=[ 7242], 00:09:15.588 | 99.00th=[ 9372], 99.50th=[ 9896], 99.90th=[11207], 99.95th=[11600], 00:09:15.588 | 99.99th=[12125] 00:09:15.588 bw ( KiB/s): min=13624, max=31464, per=90.32%, avg=26772.55, stdev=6488.90, samples=11 00:09:15.588 iops : min= 3406, max= 7866, avg=6693.09, stdev=1622.20, samples=11 00:09:15.588 lat (usec) : 750=0.01%, 1000=0.01% 00:09:15.588 lat (msec) : 2=0.13%, 4=2.25%, 10=95.06%, 20=2.55% 00:09:15.588 cpu : usr=7.76%, sys=31.70%, ctx=7311, majf=0, minf=78 00:09:15.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:15.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.588 issued rwts: total=77954,39625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.588 00:09:15.588 Run status group 0 (all jobs): 00:09:15.588 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=305MiB (319MB), run=6005-6005msec 00:09:15.588 WRITE: bw=28.9MiB/s (30.4MB/s), 28.9MiB/s-28.9MiB/s (30.4MB/s-30.4MB/s), io=155MiB (162MB), run=5347-5347msec 00:09:15.589 00:09:15.589 Disk stats (read/write): 00:09:15.589 nvme0n1: ios=76938/38755, merge=0/0, ticks=479993/195315, in_queue=675308, util=98.70% 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67188 00:09:15.589 21:22:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:15.589 [global] 00:09:15.589 thread=1 00:09:15.589 invalidate=1 00:09:15.589 rw=randrw 00:09:15.589 time_based=1 00:09:15.589 runtime=6 00:09:15.589 ioengine=libaio 00:09:15.589 direct=1 00:09:15.589 bs=4096 00:09:15.589 iodepth=128 00:09:15.589 norandommap=0 00:09:15.589 numjobs=1 00:09:15.589 00:09:15.589 verify_dump=1 00:09:15.589 verify_backlog=512 00:09:15.589 verify_state_save=0 00:09:15.589 do_verify=1 00:09:15.589 verify=crc32c-intel 00:09:15.589 [job0] 00:09:15.589 filename=/dev/nvme0n1 00:09:15.589 Could not set queue depth (nvme0n1) 00:09:15.589 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.589 fio-3.35 00:09:15.589 Starting 1 thread 00:09:16.154 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:16.411 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
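The check_ana_state calls being traced here simply poll the per-path ANA state that the kernel exposes under /sys/block. A minimal sketch of that helper, inferred from the variables and tests visible in the trace (the real target/multipath.sh may differ in its exact retry and timeout handling):

    check_ana_state() {
        local path=$1 ana_state=$2          # e.g. nvme0c0n1, "inaccessible"
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        # Wait until the sysfs node exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            if (( timeout-- == 0 )); then
                echo "ANA state of $path never became $ana_state" >&2
                return 1
            fi
            sleep 1
        done
    }

While fio keeps the verify workload running, the test flips each listener between inaccessible, non_optimized and optimized with nvmf_subsystem_listener_set_ana_state and uses this check to confirm that the initiator observed the change on nvme0c0n1 and nvme0c1n1, which is the multipath failover actually being exercised.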
00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.668 21:22:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:16.927 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.190 21:22:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67188 00:09:22.450 00:09:22.450 job0: (groupid=0, jobs=1): err= 0: pid=67209: Mon Jul 15 21:22:54 2024 00:09:22.450 read: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(324MiB/6002msec) 00:09:22.450 slat (usec): min=4, max=5015, avg=36.14, stdev=136.45 00:09:22.450 clat (usec): min=248, max=19323, avg=6441.05, stdev=1575.06 00:09:22.450 lat (usec): min=280, max=19332, avg=6477.20, stdev=1585.04 00:09:22.450 clat percentiles (usec): 00:09:22.450 | 1.00th=[ 2573], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 5276], 00:09:22.450 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6718], 00:09:22.450 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7898], 95.00th=[ 9372], 00:09:22.450 | 99.00th=[11076], 99.50th=[11731], 99.90th=[16188], 99.95th=[17433], 00:09:22.450 | 99.99th=[18220] 00:09:22.450 bw ( KiB/s): min= 9872, max=44384, per=50.36%, avg=27874.55, stdev=10803.16, samples=11 00:09:22.450 iops : min= 2468, max=11096, avg=6968.64, stdev=2700.79, samples=11 00:09:22.450 write: IOPS=8012, BW=31.3MiB/s (32.8MB/s)(164MiB/5232msec); 0 zone resets 00:09:22.450 slat (usec): min=11, max=3333, avg=49.20, stdev=88.09 00:09:22.450 clat (usec): min=239, max=18194, avg=5377.87, stdev=1536.44 00:09:22.450 lat (usec): min=264, max=18221, avg=5427.06, stdev=1546.03 00:09:22.450 clat percentiles (usec): 00:09:22.450 | 1.00th=[ 2114], 5.00th=[ 2966], 10.00th=[ 3425], 20.00th=[ 4047], 00:09:22.450 | 30.00th=[ 4621], 40.00th=[ 5211], 50.00th=[ 5604], 60.00th=[ 5866], 00:09:22.450 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6783], 95.00th=[ 7308], 00:09:22.450 | 99.00th=[10159], 99.50th=[11207], 99.90th=[15401], 99.95th=[16188], 00:09:22.450 | 99.99th=[17695] 00:09:22.450 bw ( KiB/s): min=10584, max=44776, per=86.98%, avg=27879.45, stdev=10533.57, samples=11 00:09:22.450 iops : min= 2646, max=11194, avg=6969.82, stdev=2633.38, samples=11 00:09:22.450 lat (usec) : 250=0.01%, 500=0.04%, 750=0.08%, 1000=0.12% 00:09:22.450 lat (msec) : 2=0.38%, 4=9.42%, 10=87.53%, 20=2.43% 00:09:22.450 cpu : usr=7.35%, sys=32.11%, ctx=8665, majf=0, minf=110 00:09:22.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:22.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.450 issued rwts: total=83053,41924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.450 00:09:22.451 Run status group 0 (all jobs): 00:09:22.451 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=324MiB (340MB), run=6002-6002msec 00:09:22.451 WRITE: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=164MiB (172MB), run=5232-5232msec 00:09:22.451 00:09:22.451 Disk stats (read/write): 00:09:22.451 nvme0n1: ios=82022/41086, merge=0/0, ticks=480006/188040, in_queue=668046, util=98.63% 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.451 21:22:54 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:22.451 21:22:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.451 rmmod nvme_tcp 00:09:22.451 rmmod nvme_fabrics 00:09:22.451 rmmod nvme_keyring 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 66993 ']' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 66993 ']' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:22.451 killing process with pid 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66993' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 66993 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:22.451 ************************************ 00:09:22.451 END TEST nvmf_target_multipath 00:09:22.451 ************************************ 00:09:22.451 00:09:22.451 real 0m19.192s 00:09:22.451 user 1m11.154s 00:09:22.451 sys 0m11.538s 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.451 21:22:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.451 21:22:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.451 21:22:55 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.451 21:22:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.451 21:22:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.451 21:22:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.451 ************************************ 00:09:22.451 START TEST nvmf_zcopy 00:09:22.451 ************************************ 00:09:22.451 21:22:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.710 * Looking for test storage... 
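The multipath teardown traced just above the START TEST nvmf_zcopy banner reduces to a short sequence; a sketch assembled from the commands shown in the log (the namespace removal itself runs with xtrace disabled, so that step is an assumption):

    # Drop both initiator paths, then remove the subsystem on the target side.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the initiator kernel modules (nvme_tcp, nvme_fabrics, nvme_keyring)
    # and stop the nvmf_tgt process started earlier (pid 66993 in this run).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"

    # Assumed effect of _remove_spdk_ns (not shown verbatim in the trace),
    # followed by the address flush that is shown.
    ip netns delete nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if

The zcopy test that starts next rebuilds the same veth/namespace topology from scratch.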
00:09:22.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.710 21:22:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:22.711 Cannot find device "nvmf_tgt_br" 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.711 Cannot find device "nvmf_tgt_br2" 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:22.711 Cannot find device "nvmf_tgt_br" 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:22.711 Cannot find device "nvmf_tgt_br2" 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:22.711 21:22:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.711 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:22.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:09:22.970 00:09:22.970 --- 10.0.0.2 ping statistics --- 00:09:22.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.970 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:22.970 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:22.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:09:22.971 00:09:22.971 --- 10.0.0.3 ping statistics --- 00:09:22.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.971 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:22.971 00:09:22.971 --- 10.0.0.1 ping statistics --- 00:09:22.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.971 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67461 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67461 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67461 ']' 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.971 21:22:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:23.230 [2024-07-15 21:22:56.390036] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:23.230 [2024-07-15 21:22:56.390260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.230 [2024-07-15 21:22:56.518990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.488 [2024-07-15 21:22:56.615194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.488 [2024-07-15 21:22:56.615415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:23.488 [2024-07-15 21:22:56.615431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.488 [2024-07-15 21:22:56.615440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.488 [2024-07-15 21:22:56.615446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.488 [2024-07-15 21:22:56.615474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.488 [2024-07-15 21:22:56.656353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 [2024-07-15 21:22:57.320197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 [2024-07-15 21:22:57.344253] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:24.057 malloc0 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.057 { 00:09:24.057 "params": { 00:09:24.057 "name": "Nvme$subsystem", 00:09:24.057 "trtype": "$TEST_TRANSPORT", 00:09:24.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.057 "adrfam": "ipv4", 00:09:24.057 "trsvcid": "$NVMF_PORT", 00:09:24.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.057 "hdgst": ${hdgst:-false}, 00:09:24.057 "ddgst": ${ddgst:-false} 00:09:24.057 }, 00:09:24.057 "method": "bdev_nvme_attach_controller" 00:09:24.057 } 00:09:24.057 EOF 00:09:24.057 )") 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:24.057 21:22:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.057 "params": { 00:09:24.057 "name": "Nvme1", 00:09:24.057 "trtype": "tcp", 00:09:24.057 "traddr": "10.0.0.2", 00:09:24.057 "adrfam": "ipv4", 00:09:24.057 "trsvcid": "4420", 00:09:24.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.057 "hdgst": false, 00:09:24.057 "ddgst": false 00:09:24.057 }, 00:09:24.057 "method": "bdev_nvme_attach_controller" 00:09:24.057 }' 00:09:24.316 [2024-07-15 21:22:57.444246] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:24.316 [2024-07-15 21:22:57.445005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67494 ] 00:09:24.316 [2024-07-15 21:22:57.604132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.575 [2024-07-15 21:22:57.701914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.575 [2024-07-15 21:22:57.752366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.575 Running I/O for 10 seconds... 
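At this point the target (nvmf_tgt, started inside the namespace with -m 0x2) has been provisioned over /var/tmp/spdk.sock and the first bdevperf pass (verify workload, queue depth 128, 8192-byte I/O, 10 s) is running against it. The same provisioning can be reproduced with scripts/rpc.py directly (rpc_cmd in the harness is a thin wrapper around it), and bdevperf can be pointed at a JSON file rather than the /dev/fd/62 process substitution. The RPC arguments below are copied from the trace; the "subsystems"/"bdev" envelope around the printed bdev_nvme_attach_controller fragment is an assumption, since gen_nvmf_target_json's outer wrapper is not shown in this excerpt.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

# Target-side provisioning (arguments as traced above): zcopy-enabled TCP transport,
# one subsystem with a listener on 10.0.0.2:4420, a 32 MiB malloc bdev as namespace 1.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf with the attach config the harness generated.
# The outer "subsystems" envelope is assumed; the params block matches the trace.
cat > /tmp/nvmf_bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
"$SPDK"/build/examples/bdevperf --json /tmp/nvmf_bdev.json -t 10 -q 128 -w verify -o 8192
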
00:09:34.565 00:09:34.565 Latency(us) 00:09:34.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.565 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:34.565 Verification LBA range: start 0x0 length 0x1000 00:09:34.565 Nvme1n1 : 10.01 7686.79 60.05 0.00 0.00 16606.49 1263.34 22108.53 00:09:34.565 =================================================================================================================== 00:09:34.565 Total : 7686.79 60.05 0.00 0.00 16606.49 1263.34 22108.53 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67610 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:34.824 { 00:09:34.824 "params": { 00:09:34.824 "name": "Nvme$subsystem", 00:09:34.824 "trtype": "$TEST_TRANSPORT", 00:09:34.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.824 "adrfam": "ipv4", 00:09:34.824 "trsvcid": "$NVMF_PORT", 00:09:34.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.824 "hdgst": ${hdgst:-false}, 00:09:34.824 "ddgst": ${ddgst:-false} 00:09:34.824 }, 00:09:34.824 "method": "bdev_nvme_attach_controller" 00:09:34.824 } 00:09:34.824 EOF 00:09:34.824 )") 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:34.824 [2024-07-15 21:23:08.049924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.050093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:34.824 21:23:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:34.824 "params": { 00:09:34.824 "name": "Nvme1", 00:09:34.824 "trtype": "tcp", 00:09:34.824 "traddr": "10.0.0.2", 00:09:34.824 "adrfam": "ipv4", 00:09:34.824 "trsvcid": "4420", 00:09:34.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.824 "hdgst": false, 00:09:34.824 "ddgst": false 00:09:34.824 }, 00:09:34.824 "method": "bdev_nvme_attach_controller" 00:09:34.824 }' 00:09:34.824 [2024-07-15 21:23:08.065903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.065928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.075390] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:09:34.824 [2024-07-15 21:23:08.075451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67610 ] 00:09:34.824 [2024-07-15 21:23:08.077887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.077905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.089885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.090006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.101892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.101996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.113892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.113990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.129900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.129994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.145891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.145986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.157888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.157980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.169889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.169981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.824 [2024-07-15 21:23:08.181888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.824 [2024-07-15 21:23:08.181982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.197887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.197980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.209890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.209981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.217160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.083 [2024-07-15 21:23:08.225891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.226008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.237889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.237992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.249889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.249986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.265909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.266079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.277898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.277999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.289896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.289992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.295074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.083 [2024-07-15 21:23:08.301894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.302001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.313905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.314059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.329905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.330054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.344360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.083 [2024-07-15 21:23:08.345901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.346005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.361899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.362036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.377898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.377995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.393918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.394036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.409906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.410033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.421909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.422037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.433907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:35.083 [2024-07-15 21:23:08.434036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.083 [2024-07-15 21:23:08.445913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.083 [2024-07-15 21:23:08.446039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 Running I/O for 5 seconds... 00:09:35.352 [2024-07-15 21:23:08.457902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.458016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.477519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.477563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.495072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.495109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.510229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.510265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.529901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.529936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.544741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.544777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.563499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.563541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.580797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.580855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.595641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.595678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.611468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.611500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.625831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.625864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.645343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.645376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.660304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.660337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
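The repeated error pairs here, which continue below for the rest of the 5 s randrw run, look intentional: each nvmf_subsystem_add_ns attempt pauses the subsystem, is rejected because NSID 1 is already occupied by malloc0, and the subsystem resumes, all while the backgrounded bdevperf (perfpid=67610) keeps zero-copy I/O in flight. A loop of roughly the following shape would produce this interleaving; it is a sketch of the pattern, not the harness's actual code, and $perfpid stands for the PID recorded above.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Hammer the subsystem pause/resume path while the backgrounded bdevperf
# (randrw, 5 s) is still running. Each call is expected to fail: NSID 1
# already maps to malloc0 in cnode1, hence the error pairs in the log.
while kill -0 "$perfpid" 2>/dev/null; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"
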
00:09:35.352 [2024-07-15 21:23:08.679928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.679959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.694846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.694879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.352 [2024-07-15 21:23:08.713982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.352 [2024-07-15 21:23:08.714016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.729033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.729066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.744620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.744652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.759527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.759559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.775012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.775044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.788958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.788990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.803520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.803552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.814320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.814349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.828925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.828954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.842690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.842721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.857023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.857053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.609 [2024-07-15 21:23:08.871377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.609 [2024-07-15 21:23:08.871407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.882920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 
[2024-07-15 21:23:08.882950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.897320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 [2024-07-15 21:23:08.897356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.912111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 [2024-07-15 21:23:08.912147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.931574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 [2024-07-15 21:23:08.931616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.949568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 [2024-07-15 21:23:08.949606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.610 [2024-07-15 21:23:08.967120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.610 [2024-07-15 21:23:08.967159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:08.984961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:08.985001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.002663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.002703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.017409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.017446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.028868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.028899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.043416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.043451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.059148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.059182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.072749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.072785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.087569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.087607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.103111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.103149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.120673] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.120710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.135521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.135556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.154890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.154927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.172783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.172831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.190211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.190244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.204950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.204987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.220774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.220811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.868 [2024-07-15 21:23:09.235065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.868 [2024-07-15 21:23:09.235101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.252458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.252496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.269995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.270028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.287722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.287761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.303059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.303097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.321747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.321791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.339773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.339812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.357090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.357127] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.374587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.374626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.392648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.392685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.410593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.410632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.428527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.428559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.443508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.443548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.459196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.459232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.473803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.473851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.126 [2024-07-15 21:23:09.489500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.126 [2024-07-15 21:23:09.489538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.504201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.504236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.520115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.520148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.534621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.534653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.549100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.549131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.568708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.568741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.583519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.583551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.602907] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.602944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.617913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.617949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.637463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.637508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.651966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.652005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.666566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.666599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.685981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.686022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.700870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.700912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.712364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.712403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.727429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.727473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.385 [2024-07-15 21:23:09.743013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.385 [2024-07-15 21:23:09.743045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.643 [2024-07-15 21:23:09.760893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.643 [2024-07-15 21:23:09.760933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.643 [2024-07-15 21:23:09.775862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.643 [2024-07-15 21:23:09.775903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.643 [2024-07-15 21:23:09.791561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.643 [2024-07-15 21:23:09.791602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.643 [2024-07-15 21:23:09.806179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.643 [2024-07-15 21:23:09.806215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.643 [2024-07-15 21:23:09.821985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.643 [2024-07-15 21:23:09.822020] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.836329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.836369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.850260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.850295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.866169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.866207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.880627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.880668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.891398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.891433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.906508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.906539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.926066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.926109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.944248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.944293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.959235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.959274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.974494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.974534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:09.989439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:09.989472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.644 [2024-07-15 21:23:10.005652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.644 [2024-07-15 21:23:10.005725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.021022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.021062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.036723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.036781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.052022] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.052064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.068061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.068096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.082410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.082450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.097071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.097110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.113635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.113674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.129177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.129217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.143380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.143412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.154212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.154246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.169277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.169314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.184674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.184710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.199499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.199537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.214874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.214909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.229481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.229513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.240210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.240240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.254840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.254874] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.902 [2024-07-15 21:23:10.269271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.902 [2024-07-15 21:23:10.269304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.285338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.285370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.299592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.299624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.313973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.314003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.327804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.327847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.342216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.342247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.357945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.357975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.372598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.372628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.388308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.388337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.403015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.403043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.413680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.413708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.432015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.432056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.446851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.446890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.462555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.462597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.476994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.477032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.492303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.492340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.509746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.509786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.161 [2024-07-15 21:23:10.524670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.161 [2024-07-15 21:23:10.524709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.544097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.544138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.561771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.561812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.579527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.579568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.594502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.594543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.610333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.610373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.625887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.625927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.639870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.639902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.654145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.654182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.671612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.671653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.686618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.686656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.701910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.701944] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.716602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.716638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.728151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.728186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.746193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.746235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.760936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.760975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.419 [2024-07-15 21:23:10.780405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.419 [2024-07-15 21:23:10.780449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.795451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.795484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.811056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.811092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.828741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.828779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.843949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.843984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.859601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.859637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.873958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.873991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.888065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.888099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.904145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.904180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.915403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.915437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.930049] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.930085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.940706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.940743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.955899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.955933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.975323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.975364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:10.990315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.678 [2024-07-15 21:23:10.990353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.678 [2024-07-15 21:23:11.005856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.679 [2024-07-15 21:23:11.005891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.679 [2024-07-15 21:23:11.020719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.679 [2024-07-15 21:23:11.020756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.679 [2024-07-15 21:23:11.035999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.679 [2024-07-15 21:23:11.036031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.050704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.050738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.066671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.066702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.081160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.081190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.097275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.097306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.111529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.111559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.127178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.127206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.141771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.141802] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.157153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.157181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.171063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.171094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.187063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.187095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.205906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.205936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.225369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.225403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.240847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.240879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.256728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.256761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.273008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.273040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.289004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.289037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.937 [2024-07-15 21:23:11.303439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.937 [2024-07-15 21:23:11.303473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.314969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.315001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.330248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.330281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.346163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.346197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.364696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.364729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.376002] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.376034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.391121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.391154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.407322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.407354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.423668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.423702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.435521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.435555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.451183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.451217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.470357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.470390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.485297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.485332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.495626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.495660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.511378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.511411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.527427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.527461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.541849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.541882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.196 [2024-07-15 21:23:11.557473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.196 [2024-07-15 21:23:11.557506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.573871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.573902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.592975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.593005] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.608474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.608508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.624686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.624719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.636273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.636305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.654923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.654956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.666357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.666391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.682179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.682210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.698639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.698672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.715155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.715186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.731157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.731190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.747374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.747407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.761592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.761625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.776978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.777010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.793352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.793385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.455 [2024-07-15 21:23:11.813081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.455 [2024-07-15 21:23:11.813116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.828515] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.828548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.844480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.844515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.856216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.856250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.872106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.872138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.888356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.888389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.904540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.904588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.918961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.919001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.929283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.929317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.944837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.944876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.961206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.961238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.981177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.981209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:11.996265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:11.996298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:12.012593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:12.012628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:12.028849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:12.028881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:12.040113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:12.040145] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:12.055813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:12.055855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.715 [2024-07-15 21:23:12.071947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.715 [2024-07-15 21:23:12.071979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.088067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.088100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.099459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.099492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.115009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.115058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.135102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.135135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.153550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.153583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.165169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.165201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.973 [2024-07-15 21:23:12.180679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.973 [2024-07-15 21:23:12.180711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.199768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.199806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.218139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.218174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.233059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.233093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.244602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.244634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.259555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.259588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.275847] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.275877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.291701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.291734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.307039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.307071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.323296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.323331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.974 [2024-07-15 21:23:12.340226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.974 [2024-07-15 21:23:12.340262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.356071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.356104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.374479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.374514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.385782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.385830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.400924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.400958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.420316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.420350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.435510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.435543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.455179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.455213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.470787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.470829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.486716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.486750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.501030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.501062] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.516218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.516251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.536212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.536251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.552538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.552572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.564104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.564136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.579461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.579493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.241 [2024-07-15 21:23:12.595744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.241 [2024-07-15 21:23:12.595777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.511 [2024-07-15 21:23:12.615121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.511 [2024-07-15 21:23:12.615154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.511 [2024-07-15 21:23:12.630651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.511 [2024-07-15 21:23:12.630685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.511 [2024-07-15 21:23:12.647022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.511 [2024-07-15 21:23:12.647054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.511 [2024-07-15 21:23:12.663091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.663122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.677454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.677487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.693149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.693181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.712805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.712847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.726917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.726950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.742310] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.742343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.758255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.758288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.772876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.772907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.784526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.784559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.799472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.799506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.815611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.815644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.826950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.826982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.845326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.845360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.860670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.860703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.512 [2024-07-15 21:23:12.876333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.512 [2024-07-15 21:23:12.876366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.892648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.892684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.908965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.908997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.920942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.920973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.936541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.936582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.952384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.952418] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.968560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.968609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.983482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.983517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:12.994008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:12.994039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.009100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.009131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.025289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.025323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.041435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.041467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.055755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.055789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.074507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.074540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.090003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.090035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.105846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.105877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.121799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.121845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.770 [2024-07-15 21:23:13.136338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.770 [2024-07-15 21:23:13.136378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.146822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.146877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.162494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.162527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.178131] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.178161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.192458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.192488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.203291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.203320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.217800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.217843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.231207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.231236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.246038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.246068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.261441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.261472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.275526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.275555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.289613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.289643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.304074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.031 [2024-07-15 21:23:13.304102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.031 [2024-07-15 21:23:13.319746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.319776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.032 [2024-07-15 21:23:13.334444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.334473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.032 [2024-07-15 21:23:13.345000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.345027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.032 [2024-07-15 21:23:13.359631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.359660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.032 [2024-07-15 21:23:13.374595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.374623] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.032 [2024-07-15 21:23:13.389448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.032 [2024-07-15 21:23:13.389478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.405064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.405092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.419944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.419971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.435389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.435417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.450245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.450274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 00:09:40.292 Latency(us) 00:09:40.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.292 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:40.292 Nvme1n1 : 5.01 15646.95 122.24 0.00 0.00 8172.41 3342.60 18318.50 00:09:40.292 =================================================================================================================== 00:09:40.292 Total : 15646.95 122.24 0.00 0.00 8172.41 3342.60 18318.50 00:09:40.292 [2024-07-15 21:23:13.461948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.462092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.473926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.474045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.485917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.486061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.497899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.498063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.509899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.509923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.521887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.521911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.533885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.533908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.545888] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.545913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.557876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.557898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.569859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.569876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.581849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.581875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.593827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.593848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.605806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.605834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.617788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.617812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.629769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.629786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 [2024-07-15 21:23:13.641748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.292 [2024-07-15 21:23:13.641767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.292 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67610) - No such process 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67610 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.292 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.550 delay0 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.550 21:23:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:40.550 [2024-07-15 21:23:13.844539] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:47.114 Initializing NVMe Controllers 00:09:47.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.114 Initialization complete. Launching workers. 00:09:47.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:09:47.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33 00:09:47.114 success 188, unsuccess 164, failed 0 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.114 rmmod nvme_tcp 00:09:47.114 rmmod nvme_fabrics 00:09:47.114 rmmod nvme_keyring 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67461 ']' 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67461 00:09:47.114 21:23:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67461 ']' 00:09:47.115 21:23:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67461 00:09:47.115 21:23:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:47.115 21:23:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.115 21:23:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67461 00:09:47.115 killing process with pid 67461 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67461' 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67461 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67461 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.115 21:23:20 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:47.115 00:09:47.115 real 0m24.604s 00:09:47.115 user 0m39.799s 00:09:47.115 sys 0m7.992s 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.115 21:23:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.115 ************************************ 00:09:47.115 END TEST nvmf_zcopy 00:09:47.115 ************************************ 00:09:47.115 21:23:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:47.115 21:23:20 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:47.115 21:23:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.115 21:23:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.115 21:23:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.115 ************************************ 00:09:47.115 START TEST nvmf_nmic 00:09:47.115 ************************************ 00:09:47.115 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:47.375 * Looking for test storage... 
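The nvmf_zcopy run above reduces to a short RPC sequence once the xtrace noise is set aside: each nvmf_subsystem_add_ns call that re-requested NSID 1 while it was still attached failed with "Requested NSID 1 already in use", which is what fills the long error run, and the final steps swap in a delay bdev and drive it with the abort example. The lines below are copied from the xtrace entries of this run and are shown only as a readable recap, not as additional test steps; rpc_cmd is the harness helper used throughout this log, so the block is meaningful only inside that harness.

# Recap of the zcopy namespace/abort sequence captured in this run:
rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

In this run the abort example reported 352 submitted aborts with 188 successful, matching the summary printed just before the zcopy teardown above.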
00:09:47.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:47.375 Cannot find device "nvmf_tgt_br" 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.375 Cannot find device "nvmf_tgt_br2" 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:47.375 Cannot find device "nvmf_tgt_br" 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:47.375 Cannot find device "nvmf_tgt_br2" 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:47.375 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:47.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:47.635 00:09:47.635 --- 10.0.0.2 ping statistics --- 00:09:47.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.635 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:47.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:09:47.635 00:09:47.635 --- 10.0.0.3 ping statistics --- 00:09:47.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.635 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:47.635 00:09:47.635 --- 10.0.0.1 ping statistics --- 00:09:47.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.635 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67941 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67941 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 67941 ']' 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.635 21:23:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:47.894 [2024-07-15 21:23:21.016032] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:47.894 [2024-07-15 21:23:21.016085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.894 [2024-07-15 21:23:21.157192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.894 [2024-07-15 21:23:21.237119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.894 [2024-07-15 21:23:21.237168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.894 [2024-07-15 21:23:21.237178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.894 [2024-07-15 21:23:21.237186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.894 [2024-07-15 21:23:21.237193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.894 [2024-07-15 21:23:21.237313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.894 [2024-07-15 21:23:21.238241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.894 [2024-07-15 21:23:21.238380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.894 [2024-07-15 21:23:21.238380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.192 [2024-07-15 21:23:21.279219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 [2024-07-15 21:23:21.914035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 Malloc0 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.760 21:23:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:21 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 [2024-07-15 21:23:22.005005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.760 test case1: single bdev can't be used in multiple subsystems 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.760 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.761 [2024-07-15 21:23:22.040825] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:48.761 [2024-07-15 21:23:22.041006] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:48.761 [2024-07-15 21:23:22.041142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.761 request: 00:09:48.761 { 00:09:48.761 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:48.761 "namespace": { 00:09:48.761 "bdev_name": "Malloc0", 00:09:48.761 "no_auto_visible": false 00:09:48.761 }, 00:09:48.761 "method": "nvmf_subsystem_add_ns", 00:09:48.761 "req_id": 1 00:09:48.761 } 00:09:48.761 Got JSON-RPC error response 00:09:48.761 response: 00:09:48.761 { 00:09:48.761 "code": -32602, 00:09:48.761 "message": "Invalid parameters" 00:09:48.761 } 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:48.761 Adding namespace failed - expected result. 
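Test case 1 above checks that a bdev already claimed by one subsystem cannot be attached to a second one: the attempt to add Malloc0 to cnode2 fails with "already claimed: type exclusive_write", and the JSON-RPC -32602 response is the expected result. Expressed as direct rpc.py calls (a sketch only; nmic.sh drives the same RPCs through its rpc_cmd wrapper and records the status instead of exiting), the sequence amounts to:

    # Illustrative rpc.py equivalent of test case 1; the script itself uses the rpc_cmd helper.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # cnode1 claims Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'unexpected: second claim on Malloc0 succeeded'; exit 1              # must fail: bdev already claimed
    fi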
00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:48.761 test case2: host connect to nvmf target in multiple paths 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.761 [2024-07-15 21:23:22.060907] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.761 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:49.020 21:23:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:50.998 21:23:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.256 [global] 00:09:51.256 thread=1 00:09:51.256 invalidate=1 00:09:51.256 rw=write 00:09:51.256 time_based=1 00:09:51.256 runtime=1 00:09:51.256 ioengine=libaio 00:09:51.256 direct=1 00:09:51.256 bs=4096 00:09:51.256 iodepth=1 00:09:51.256 norandommap=0 00:09:51.256 numjobs=1 00:09:51.256 00:09:51.256 verify_dump=1 00:09:51.256 verify_backlog=512 00:09:51.256 verify_state_save=0 00:09:51.256 do_verify=1 00:09:51.256 verify=crc32c-intel 00:09:51.256 [job0] 00:09:51.256 filename=/dev/nvme0n1 00:09:51.256 Could not set queue depth (nvme0n1) 00:09:51.256 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.256 fio-3.35 00:09:51.256 Starting 1 thread 00:09:52.634 00:09:52.634 job0: (groupid=0, jobs=1): err= 0: pid=68027: Mon Jul 15 21:23:25 2024 00:09:52.634 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:09:52.634 slat (nsec): min=7230, max=20981, avg=7793.54, stdev=1029.44 00:09:52.634 clat (usec): 
min=100, max=612, avg=135.30, stdev=15.80 00:09:52.634 lat (usec): min=110, max=619, avg=143.09, stdev=15.82 00:09:52.634 clat percentiles (usec): 00:09:52.634 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:09:52.634 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:09:52.634 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:09:52.634 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 231], 00:09:52.634 | 99.99th=[ 611] 00:09:52.634 write: IOPS=4307, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec); 0 zone resets 00:09:52.634 slat (usec): min=11, max=109, avg=12.98, stdev= 5.16 00:09:52.634 clat (usec): min=32, max=572, avg=81.25, stdev=14.41 00:09:52.634 lat (usec): min=73, max=583, avg=94.23, stdev=16.12 00:09:52.634 clat percentiles (usec): 00:09:52.634 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 73], 00:09:52.634 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 84], 00:09:52.634 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 93], 95.00th=[ 98], 00:09:52.634 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 155], 99.95th=[ 383], 00:09:52.634 | 99.99th=[ 570] 00:09:52.634 bw ( KiB/s): min=16964, max=16964, per=98.45%, avg=16964.00, stdev= 0.00, samples=1 00:09:52.634 iops : min= 4241, max= 4241, avg=4241.00, stdev= 0.00, samples=1 00:09:52.634 lat (usec) : 50=0.01%, 100=49.64%, 250=50.29%, 500=0.04%, 750=0.02% 00:09:52.634 cpu : usr=1.70%, sys=7.20%, ctx=8409, majf=0, minf=2 00:09:52.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.634 issued rwts: total=4096,4312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.634 00:09:52.634 Run status group 0 (all jobs): 00:09:52.634 READ: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:09:52.634 WRITE: bw=16.8MiB/s (17.6MB/s), 16.8MiB/s-16.8MiB/s (17.6MB/s-17.6MB/s), io=16.8MiB (17.7MB), run=1001-1001msec 00:09:52.634 00:09:52.634 Disk stats (read/write): 00:09:52.634 nvme0n1: ios=3634/4010, merge=0/0, ticks=494/349, in_queue=843, util=90.98% 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
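Test case 2 then connects to the same subsystem over both listeners (ports 4420 and 4421), waits for the namespace to show up as a block device, runs a short verified write job through the fio wrapper, and disconnects; the "disconnected 2 controller(s)" message above confirms both paths were torn down in one call. Stripped of the helper functions, the host-side flow is roughly as follows (HOSTNQN/HOSTID are the generated values used in this run, and the polling loop is a simplification of the waitforserial helper):

    # Condensed host-side flow of test case 2 (multipath connect + verified fio write).
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
    HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB blocks, QD1, 1 s runtime, crc32c verify
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both controllers/paths at once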
00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.634 rmmod nvme_tcp 00:09:52.634 rmmod nvme_fabrics 00:09:52.634 rmmod nvme_keyring 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67941 ']' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67941 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 67941 ']' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 67941 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67941 00:09:52.634 killing process with pid 67941 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67941' 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 67941 00:09:52.634 21:23:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 67941 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:52.893 00:09:52.893 real 0m5.748s 00:09:52.893 user 0m17.737s 00:09:52.893 sys 0m2.524s 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.893 21:23:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:52.893 ************************************ 00:09:52.893 END TEST nvmf_nmic 00:09:52.893 ************************************ 00:09:52.893 21:23:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.893 21:23:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:52.893 21:23:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.893 21:23:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:52.893 21:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.893 ************************************ 00:09:52.893 START TEST nvmf_fio_target 00:09:52.893 ************************************ 00:09:52.893 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:53.152 * Looking for test storage... 00:09:53.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.152 21:23:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.153 
21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:53.153 Cannot find device "nvmf_tgt_br" 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.153 Cannot find device "nvmf_tgt_br2" 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:53.153 Cannot find device "nvmf_tgt_br" 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:53.153 Cannot find device "nvmf_tgt_br2" 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:53.153 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:53.153 21:23:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.412 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:53.412 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:53.413 00:09:53.413 --- 10.0.0.2 ping statistics --- 00:09:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.413 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:53.413 00:09:53.413 --- 10.0.0.3 ping statistics --- 00:09:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.413 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:09:53.413 00:09:53.413 --- 10.0.0.1 ping statistics --- 00:09:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.413 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.413 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68205 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68205 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68205 ']' 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
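As in the nmic run, nvmfappstart launches nvmf_tgt inside the target namespace and then waits for its JSON-RPC socket before issuing any configuration. A minimal sketch of that start-and-wait pattern follows; the liveness probe via spdk_get_version is an assumption for illustration, since the real waitforlisten helper in autotest_common.sh carries its own retry and timeout logic:

    # Start the target in the test namespace and block until its RPC socket answers.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5                                                 # illustrative poll, not the literal helper
    done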
00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.673 21:23:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.673 [2024-07-15 21:23:26.850217] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:09:53.673 [2024-07-15 21:23:26.850278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.673 [2024-07-15 21:23:26.990692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.932 [2024-07-15 21:23:27.067135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.932 [2024-07-15 21:23:27.067185] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.932 [2024-07-15 21:23:27.067194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.932 [2024-07-15 21:23:27.067202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.932 [2024-07-15 21:23:27.067225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.932 [2024-07-15 21:23:27.067416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.932 [2024-07-15 21:23:27.067648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.932 [2024-07-15 21:23:27.068296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.932 [2024-07-15 21:23:27.068296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.932 [2024-07-15 21:23:27.109426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.499 21:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.758 [2024-07-15 21:23:27.913323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.758 21:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.018 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:55.018 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.018 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:55.018 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.276 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:55.276 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.535 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:55.535 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:55.794 21:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.052 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:56.052 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.052 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:56.052 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.311 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:56.311 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:56.570 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.829 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.829 21:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.829 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:56.829 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.088 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.352 [2024-07-15 21:23:30.482145] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.352 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:57.352 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:57.618 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.877 21:23:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:57.877 21:23:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.877 21:23:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.877 
21:23:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:57.877 21:23:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:57.877 21:23:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:59.780 21:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:59.780 [global] 00:09:59.780 thread=1 00:09:59.780 invalidate=1 00:09:59.780 rw=write 00:09:59.780 time_based=1 00:09:59.780 runtime=1 00:09:59.780 ioengine=libaio 00:09:59.780 direct=1 00:09:59.780 bs=4096 00:09:59.780 iodepth=1 00:09:59.780 norandommap=0 00:09:59.780 numjobs=1 00:09:59.780 00:09:59.780 verify_dump=1 00:09:59.780 verify_backlog=512 00:09:59.780 verify_state_save=0 00:09:59.780 do_verify=1 00:09:59.780 verify=crc32c-intel 00:09:59.780 [job0] 00:09:59.780 filename=/dev/nvme0n1 00:09:59.780 [job1] 00:09:59.780 filename=/dev/nvme0n2 00:09:59.780 [job2] 00:09:59.780 filename=/dev/nvme0n3 00:09:59.780 [job3] 00:09:59.780 filename=/dev/nvme0n4 00:10:00.039 Could not set queue depth (nvme0n1) 00:10:00.039 Could not set queue depth (nvme0n2) 00:10:00.039 Could not set queue depth (nvme0n3) 00:10:00.039 Could not set queue depth (nvme0n4) 00:10:00.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.039 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.039 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.039 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.039 fio-3.35 00:10:00.039 Starting 4 threads 00:10:01.417 00:10:01.417 job0: (groupid=0, jobs=1): err= 0: pid=68385: Mon Jul 15 21:23:34 2024 00:10:01.417 read: IOPS=3797, BW=14.8MiB/s (15.6MB/s)(14.8MiB/1001msec) 00:10:01.417 slat (nsec): min=6784, max=22831, avg=7324.35, stdev=917.06 00:10:01.417 clat (usec): min=108, max=1817, avg=135.21, stdev=41.19 00:10:01.417 lat (usec): min=115, max=1824, avg=142.54, stdev=41.20 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:10:01.417 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 137], 00:10:01.417 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:10:01.417 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 510], 99.95th=[ 1811], 00:10:01.417 | 99.99th=[ 1811] 00:10:01.417 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:01.417 slat (usec): min=8, max=145, avg=12.67, stdev= 5.21 00:10:01.417 clat (usec): min=66, max=621, avg=97.54, stdev=13.03 00:10:01.417 lat (usec): min=78, max=633, avg=110.21, stdev=14.69 00:10:01.417 clat 
percentiles (usec): 00:10:01.417 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 90], 00:10:01.417 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 99], 00:10:01.417 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 116], 00:10:01.417 | 99.00th=[ 128], 99.50th=[ 133], 99.90th=[ 149], 99.95th=[ 163], 00:10:01.417 | 99.99th=[ 619] 00:10:01.417 bw ( KiB/s): min=16384, max=16384, per=26.69%, avg=16384.00, stdev= 0.00, samples=1 00:10:01.417 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:01.417 lat (usec) : 100=34.24%, 250=65.67%, 500=0.03%, 750=0.04% 00:10:01.417 lat (msec) : 2=0.03% 00:10:01.417 cpu : usr=1.80%, sys=6.50%, ctx=7901, majf=0, minf=7 00:10:01.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 issued rwts: total=3801,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.417 job1: (groupid=0, jobs=1): err= 0: pid=68386: Mon Jul 15 21:23:34 2024 00:10:01.417 read: IOPS=3640, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1001msec) 00:10:01.417 slat (nsec): min=7027, max=40123, avg=7807.06, stdev=1298.07 00:10:01.417 clat (usec): min=108, max=5949, avg=139.23, stdev=124.20 00:10:01.417 lat (usec): min=116, max=5956, avg=147.04, stdev=124.36 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 128], 00:10:01.417 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:10:01.417 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:10:01.417 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 791], 99.95th=[ 4047], 00:10:01.417 | 99.99th=[ 5932] 00:10:01.417 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:01.417 slat (usec): min=8, max=117, avg=12.77, stdev= 4.87 00:10:01.417 clat (usec): min=73, max=4174, avg=98.88, stdev=75.97 00:10:01.417 lat (usec): min=84, max=4191, avg=111.65, stdev=76.34 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 80], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:10:01.417 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:10:01.417 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 117], 00:10:01.417 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 145], 99.95th=[ 153], 00:10:01.417 | 99.99th=[ 4178] 00:10:01.417 bw ( KiB/s): min=16384, max=16384, per=26.69%, avg=16384.00, stdev= 0.00, samples=1 00:10:01.417 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:01.417 lat (usec) : 100=35.21%, 250=64.72%, 1000=0.01% 00:10:01.417 lat (msec) : 4=0.03%, 10=0.04% 00:10:01.417 cpu : usr=1.20%, sys=7.00%, ctx=7742, majf=0, minf=9 00:10:01.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 issued rwts: total=3644,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.417 job2: (groupid=0, jobs=1): err= 0: pid=68387: Mon Jul 15 21:23:34 2024 00:10:01.417 read: IOPS=3453, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:10:01.417 slat (nsec): min=6972, max=33518, avg=7842.66, stdev=1195.56 00:10:01.417 clat (usec): min=119, max=1610, 
avg=148.35, stdev=27.21 00:10:01.417 lat (usec): min=127, max=1618, avg=156.19, stdev=27.23 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 139], 00:10:01.417 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:10:01.417 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:10:01.417 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 217], 99.95th=[ 223], 00:10:01.417 | 99.99th=[ 1614] 00:10:01.417 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:01.417 slat (usec): min=8, max=135, avg=12.82, stdev= 5.18 00:10:01.417 clat (usec): min=78, max=1429, avg=113.81, stdev=24.96 00:10:01.417 lat (usec): min=89, max=1440, avg=126.63, stdev=25.78 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 92], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 103], 00:10:01.417 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 116], 00:10:01.417 | 70.00th=[ 120], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 135], 00:10:01.417 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 161], 99.95th=[ 180], 00:10:01.417 | 99.99th=[ 1434] 00:10:01.417 bw ( KiB/s): min=16024, max=16024, per=26.11%, avg=16024.00, stdev= 0.00, samples=1 00:10:01.417 iops : min= 4006, max= 4006, avg=4006.00, stdev= 0.00, samples=1 00:10:01.417 lat (usec) : 100=5.72%, 250=94.25% 00:10:01.417 lat (msec) : 2=0.03% 00:10:01.417 cpu : usr=1.70%, sys=5.80%, ctx=7041, majf=0, minf=7 00:10:01.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 issued rwts: total=3457,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.417 job3: (groupid=0, jobs=1): err= 0: pid=68388: Mon Jul 15 21:23:34 2024 00:10:01.417 read: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:10:01.417 slat (nsec): min=7069, max=21630, avg=7854.47, stdev=949.43 00:10:01.417 clat (usec): min=123, max=535, avg=153.79, stdev=16.14 00:10:01.417 lat (usec): min=131, max=553, avg=161.64, stdev=16.28 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:10:01.417 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:10:01.417 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:10:01.417 | 99.00th=[ 196], 99.50th=[ 245], 99.90th=[ 285], 99.95th=[ 449], 00:10:01.417 | 99.99th=[ 537] 00:10:01.417 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:01.417 slat (usec): min=11, max=199, avg=12.88, stdev= 5.77 00:10:01.417 clat (usec): min=81, max=390, avg=112.37, stdev=13.60 00:10:01.417 lat (usec): min=93, max=402, avg=125.25, stdev=15.62 00:10:01.417 clat percentiles (usec): 00:10:01.417 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 103], 00:10:01.417 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 114], 00:10:01.417 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 133], 00:10:01.417 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 262], 99.95th=[ 343], 00:10:01.417 | 99.99th=[ 392] 00:10:01.417 bw ( KiB/s): min=15584, max=15584, per=25.39%, avg=15584.00, stdev= 0.00, samples=1 00:10:01.417 iops : min= 3896, max= 3896, avg=3896.00, stdev= 0.00, samples=1 00:10:01.417 lat (usec) : 100=5.46%, 250=94.26%, 500=0.26%, 750=0.01% 00:10:01.417 cpu : 
usr=1.20%, sys=6.20%, ctx=6957, majf=0, minf=12 00:10:01.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.417 issued rwts: total=3372,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.417 00:10:01.417 Run status group 0 (all jobs): 00:10:01.418 READ: bw=55.7MiB/s (58.4MB/s), 13.2MiB/s-14.8MiB/s (13.8MB/s-15.6MB/s), io=55.8MiB (58.5MB), run=1001-1001msec 00:10:01.418 WRITE: bw=59.9MiB/s (62.9MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.0MiB (62.9MB), run=1001-1001msec 00:10:01.418 00:10:01.418 Disk stats (read/write): 00:10:01.418 nvme0n1: ios=3320/3584, merge=0/0, ticks=465/368, in_queue=833, util=88.78% 00:10:01.418 nvme0n2: ios=3158/3584, merge=0/0, ticks=466/366, in_queue=832, util=88.99% 00:10:01.418 nvme0n3: ios=3021/3072, merge=0/0, ticks=451/369, in_queue=820, util=89.21% 00:10:01.418 nvme0n4: ios=2942/3072, merge=0/0, ticks=455/355, in_queue=810, util=89.77% 00:10:01.418 21:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:01.418 [global] 00:10:01.418 thread=1 00:10:01.418 invalidate=1 00:10:01.418 rw=randwrite 00:10:01.418 time_based=1 00:10:01.418 runtime=1 00:10:01.418 ioengine=libaio 00:10:01.418 direct=1 00:10:01.418 bs=4096 00:10:01.418 iodepth=1 00:10:01.418 norandommap=0 00:10:01.418 numjobs=1 00:10:01.418 00:10:01.418 verify_dump=1 00:10:01.418 verify_backlog=512 00:10:01.418 verify_state_save=0 00:10:01.418 do_verify=1 00:10:01.418 verify=crc32c-intel 00:10:01.418 [job0] 00:10:01.418 filename=/dev/nvme0n1 00:10:01.418 [job1] 00:10:01.418 filename=/dev/nvme0n2 00:10:01.418 [job2] 00:10:01.418 filename=/dev/nvme0n3 00:10:01.418 [job3] 00:10:01.418 filename=/dev/nvme0n4 00:10:01.418 Could not set queue depth (nvme0n1) 00:10:01.418 Could not set queue depth (nvme0n2) 00:10:01.418 Could not set queue depth (nvme0n3) 00:10:01.418 Could not set queue depth (nvme0n4) 00:10:01.418 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.418 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.418 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.418 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.418 fio-3.35 00:10:01.418 Starting 4 threads 00:10:02.841 00:10:02.841 job0: (groupid=0, jobs=1): err= 0: pid=68441: Mon Jul 15 21:23:35 2024 00:10:02.841 read: IOPS=3801, BW=14.8MiB/s (15.6MB/s)(14.8MiB/1000msec) 00:10:02.841 slat (nsec): min=6937, max=75889, avg=8068.17, stdev=3344.72 00:10:02.841 clat (usec): min=110, max=1221, avg=133.34, stdev=22.22 00:10:02.841 lat (usec): min=118, max=1229, avg=141.41, stdev=22.90 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:10:02.841 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:10:02.841 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 151], 00:10:02.841 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 310], 99.95th=[ 586], 00:10:02.841 | 99.99th=[ 1221] 00:10:02.841 write: IOPS=4096, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:10:02.841 slat (usec): min=8, max=126, avg=13.79, stdev= 7.02 00:10:02.841 clat (usec): min=70, max=5715, avg=97.27, stdev=106.35 00:10:02.841 lat (usec): min=82, max=5730, avg=111.06, stdev=106.86 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 78], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:10:02.841 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 96], 00:10:02.841 | 70.00th=[ 98], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 113], 00:10:02.841 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 318], 99.95th=[ 717], 00:10:02.841 | 99.99th=[ 5735] 00:10:02.841 bw ( KiB/s): min=16384, max=16384, per=31.38%, avg=16384.00, stdev= 0.00, samples=1 00:10:02.841 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:02.841 lat (usec) : 100=39.48%, 250=60.38%, 500=0.06%, 750=0.04% 00:10:02.841 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:10:02.841 cpu : usr=2.10%, sys=7.00%, ctx=7899, majf=0, minf=9 00:10:02.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 issued rwts: total=3801,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.841 job1: (groupid=0, jobs=1): err= 0: pid=68442: Mon Jul 15 21:23:35 2024 00:10:02.841 read: IOPS=2297, BW=9191KiB/s (9411kB/s)(9200KiB/1001msec) 00:10:02.841 slat (nsec): min=5668, max=44024, avg=7885.27, stdev=2698.73 00:10:02.841 clat (usec): min=182, max=1888, avg=219.03, stdev=39.00 00:10:02.841 lat (usec): min=189, max=1894, avg=226.91, stdev=39.44 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:10:02.841 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:10:02.841 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 245], 00:10:02.841 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 529], 99.95th=[ 537], 00:10:02.841 | 99.99th=[ 1893] 00:10:02.841 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:02.841 slat (nsec): min=7119, max=99472, avg=11268.61, stdev=5301.23 00:10:02.841 clat (usec): min=92, max=414, avg=173.94, stdev=18.71 00:10:02.841 lat (usec): min=120, max=424, avg=185.21, stdev=19.78 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:02.841 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:10:02.841 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 198], 00:10:02.841 | 99.00th=[ 233], 99.50th=[ 273], 99.90th=[ 383], 99.95th=[ 408], 00:10:02.841 | 99.99th=[ 416] 00:10:02.841 bw ( KiB/s): min=11184, max=11184, per=21.42%, avg=11184.00, stdev= 0.00, samples=1 00:10:02.841 iops : min= 2796, max= 2796, avg=2796.00, stdev= 0.00, samples=1 00:10:02.841 lat (usec) : 100=0.10%, 250=97.82%, 500=2.02%, 750=0.04% 00:10:02.841 lat (msec) : 2=0.02% 00:10:02.841 cpu : usr=1.40%, sys=3.90%, ctx=4863, majf=0, minf=15 00:10:02.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.841 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:02.841 job2: (groupid=0, jobs=1): err= 0: pid=68443: Mon Jul 15 21:23:35 2024 00:10:02.841 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:02.841 slat (nsec): min=7180, max=24841, avg=7798.90, stdev=992.70 00:10:02.841 clat (usec): min=114, max=456, avg=145.40, stdev=11.42 00:10:02.841 lat (usec): min=122, max=464, avg=153.20, stdev=11.47 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:10:02.841 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:10:02.841 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:10:02.841 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 219], 99.95th=[ 302], 00:10:02.841 | 99.99th=[ 457] 00:10:02.841 write: IOPS=3845, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:10:02.841 slat (usec): min=11, max=137, avg=12.76, stdev= 5.26 00:10:02.841 clat (usec): min=69, max=179, avg=102.75, stdev= 9.15 00:10:02.841 lat (usec): min=81, max=282, avg=115.51, stdev=11.37 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:10:02.841 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 104], 00:10:02.841 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 115], 95.00th=[ 119], 00:10:02.841 | 99.00th=[ 130], 99.50th=[ 135], 99.90th=[ 155], 99.95th=[ 163], 00:10:02.841 | 99.99th=[ 180] 00:10:02.841 bw ( KiB/s): min=16384, max=16384, per=31.38%, avg=16384.00, stdev= 0.00, samples=1 00:10:02.841 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:02.841 lat (usec) : 100=21.81%, 250=78.15%, 500=0.04% 00:10:02.841 cpu : usr=1.40%, sys=6.40%, ctx=7438, majf=0, minf=13 00:10:02.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 issued rwts: total=3584,3849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.841 job3: (groupid=0, jobs=1): err= 0: pid=68444: Mon Jul 15 21:23:35 2024 00:10:02.841 read: IOPS=2298, BW=9195KiB/s (9415kB/s)(9204KiB/1001msec) 00:10:02.841 slat (nsec): min=5681, max=66394, avg=7228.83, stdev=3659.87 00:10:02.841 clat (usec): min=115, max=1932, avg=219.75, stdev=39.64 00:10:02.841 lat (usec): min=128, max=1940, avg=226.98, stdev=40.27 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:02.841 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:10:02.841 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 245], 00:10:02.841 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 498], 99.95th=[ 529], 00:10:02.841 | 99.99th=[ 1926] 00:10:02.841 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:02.841 slat (nsec): min=7201, max=92714, avg=12748.34, stdev=5452.90 00:10:02.841 clat (usec): min=94, max=436, avg=172.31, stdev=18.23 00:10:02.841 lat (usec): min=119, max=448, avg=185.06, stdev=19.22 00:10:02.841 clat percentiles (usec): 00:10:02.841 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:10:02.841 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:02.841 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:10:02.841 | 99.00th=[ 229], 99.50th=[ 265], 99.90th=[ 363], 99.95th=[ 416], 
00:10:02.841 | 99.99th=[ 437] 00:10:02.841 bw ( KiB/s): min=11190, max=11190, per=21.43%, avg=11190.00, stdev= 0.00, samples=1 00:10:02.841 iops : min= 2797, max= 2797, avg=2797.00, stdev= 0.00, samples=1 00:10:02.841 lat (usec) : 100=0.04%, 250=98.05%, 500=1.87%, 750=0.02% 00:10:02.841 lat (msec) : 2=0.02% 00:10:02.841 cpu : usr=1.10%, sys=4.40%, ctx=4862, majf=0, minf=10 00:10:02.841 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.841 issued rwts: total=2301,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.841 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.841 00:10:02.841 Run status group 0 (all jobs): 00:10:02.841 READ: bw=46.8MiB/s (49.0MB/s), 9191KiB/s-14.8MiB/s (9411kB/s-15.6MB/s), io=46.8MiB (49.1MB), run=1000-1001msec 00:10:02.841 WRITE: bw=51.0MiB/s (53.5MB/s), 9.99MiB/s-16.0MiB/s (10.5MB/s-16.8MB/s), io=51.0MiB (53.5MB), run=1000-1001msec 00:10:02.841 00:10:02.841 Disk stats (read/write): 00:10:02.841 nvme0n1: ios=3317/3584, merge=0/0, ticks=453/362, in_queue=815, util=88.77% 00:10:02.841 nvme0n2: ios=2097/2170, merge=0/0, ticks=476/352, in_queue=828, util=89.61% 00:10:02.841 nvme0n3: ios=3093/3417, merge=0/0, ticks=473/367, in_queue=840, util=89.75% 00:10:02.842 nvme0n4: ios=2048/2169, merge=0/0, ticks=436/378, in_queue=814, util=89.80% 00:10:02.842 21:23:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:02.842 [global] 00:10:02.842 thread=1 00:10:02.842 invalidate=1 00:10:02.842 rw=write 00:10:02.842 time_based=1 00:10:02.842 runtime=1 00:10:02.842 ioengine=libaio 00:10:02.842 direct=1 00:10:02.842 bs=4096 00:10:02.842 iodepth=128 00:10:02.842 norandommap=0 00:10:02.842 numjobs=1 00:10:02.842 00:10:02.842 verify_dump=1 00:10:02.842 verify_backlog=512 00:10:02.842 verify_state_save=0 00:10:02.842 do_verify=1 00:10:02.842 verify=crc32c-intel 00:10:02.842 [job0] 00:10:02.842 filename=/dev/nvme0n1 00:10:02.842 [job1] 00:10:02.842 filename=/dev/nvme0n2 00:10:02.842 [job2] 00:10:02.842 filename=/dev/nvme0n3 00:10:02.842 [job3] 00:10:02.842 filename=/dev/nvme0n4 00:10:02.842 Could not set queue depth (nvme0n1) 00:10:02.842 Could not set queue depth (nvme0n2) 00:10:02.842 Could not set queue depth (nvme0n3) 00:10:02.842 Could not set queue depth (nvme0n4) 00:10:02.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.842 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.842 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.842 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:02.842 fio-3.35 00:10:02.842 Starting 4 threads 00:10:04.215 00:10:04.215 job0: (groupid=0, jobs=1): err= 0: pid=68508: Mon Jul 15 21:23:37 2024 00:10:04.215 read: IOPS=5923, BW=23.1MiB/s (24.3MB/s)(23.3MiB/1005msec) 00:10:04.215 slat (usec): min=16, max=3651, avg=79.64, stdev=284.94 00:10:04.215 clat (usec): min=1040, max=23532, avg=10913.64, stdev=2338.17 00:10:04.215 lat (usec): min=4422, max=23561, avg=10993.28, stdev=2333.06 00:10:04.215 clat percentiles (usec): 00:10:04.215 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 
9896], 00:10:04.215 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:10:04.215 | 70.00th=[10421], 80.00th=[10683], 90.00th=[12780], 95.00th=[17171], 00:10:04.215 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:10:04.215 | 99.99th=[23462] 00:10:04.215 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:04.215 slat (usec): min=23, max=5449, avg=75.39, stdev=243.63 00:10:04.215 clat (usec): min=7703, max=15779, avg=10105.39, stdev=1292.06 00:10:04.215 lat (usec): min=8058, max=19050, avg=10180.78, stdev=1284.32 00:10:04.215 clat percentiles (usec): 00:10:04.215 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9503], 00:10:04.215 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:10:04.215 | 70.00th=[10028], 80.00th=[10290], 90.00th=[11076], 95.00th=[13698], 00:10:04.215 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:10:04.215 | 99.99th=[15795] 00:10:04.215 bw ( KiB/s): min=22898, max=26208, per=33.50%, avg=24553.00, stdev=2340.52, samples=2 00:10:04.215 iops : min= 5724, max= 6552, avg=6138.00, stdev=585.48, samples=2 00:10:04.215 lat (msec) : 2=0.01%, 10=47.81%, 20=51.18%, 50=1.01% 00:10:04.215 cpu : usr=6.37%, sys=23.61%, ctx=586, majf=0, minf=5 00:10:04.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:04.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.215 issued rwts: total=5953,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.215 job1: (groupid=0, jobs=1): err= 0: pid=68509: Mon Jul 15 21:23:37 2024 00:10:04.216 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:04.216 slat (usec): min=15, max=6654, avg=150.82, stdev=562.08 00:10:04.216 clat (usec): min=8491, max=32432, avg=19547.11, stdev=5221.79 00:10:04.216 lat (usec): min=8536, max=33388, avg=19697.93, stdev=5262.21 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[14222], 00:10:04.216 | 30.00th=[19268], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:10:04.216 | 70.00th=[22152], 80.00th=[22676], 90.00th=[24249], 95.00th=[26870], 00:10:04.216 | 99.00th=[28443], 99.50th=[30278], 99.90th=[31851], 99.95th=[32375], 00:10:04.216 | 99.99th=[32375] 00:10:04.216 write: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(13.9MiB/1004msec); 0 zone resets 00:10:04.216 slat (usec): min=9, max=6726, avg=140.03, stdev=568.62 00:10:04.216 clat (usec): min=688, max=32021, avg=18547.19, stdev=5302.19 00:10:04.216 lat (usec): min=3263, max=32067, avg=18687.22, stdev=5322.24 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[14615], 00:10:04.216 | 30.00th=[16319], 40.00th=[17957], 50.00th=[20055], 60.00th=[20579], 00:10:04.216 | 70.00th=[20841], 80.00th=[21627], 90.00th=[23462], 95.00th=[27657], 00:10:04.216 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:10:04.216 | 99.99th=[32113] 00:10:04.216 bw ( KiB/s): min=12312, max=15177, per=18.75%, avg=13744.50, stdev=2025.86, samples=2 00:10:04.216 iops : min= 3078, max= 3794, avg=3436.00, stdev=506.29, samples=2 00:10:04.216 lat (usec) : 750=0.02% 00:10:04.216 lat (msec) : 4=0.24%, 10=10.50%, 20=31.17%, 50=58.07% 00:10:04.216 cpu : usr=3.39%, sys=14.46%, ctx=714, majf=0, minf=19 00:10:04.216 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:04.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.216 issued rwts: total=3072,3565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.216 job2: (groupid=0, jobs=1): err= 0: pid=68510: Mon Jul 15 21:23:37 2024 00:10:04.216 read: IOPS=5438, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1004msec) 00:10:04.216 slat (usec): min=7, max=5392, avg=85.68, stdev=349.85 00:10:04.216 clat (usec): min=3318, max=25434, avg=11783.54, stdev=2302.37 00:10:04.216 lat (usec): min=3336, max=25452, avg=11869.22, stdev=2297.92 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[ 9110], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:10:04.216 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:04.216 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12518], 95.00th=[17695], 00:10:04.216 | 99.00th=[21365], 99.50th=[22676], 99.90th=[25297], 99.95th=[25560], 00:10:04.216 | 99.99th=[25560] 00:10:04.216 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:04.216 slat (usec): min=9, max=6353, avg=83.31, stdev=290.78 00:10:04.216 clat (usec): min=8465, max=17845, avg=11103.51, stdev=1182.93 00:10:04.216 lat (usec): min=8516, max=19636, avg=11186.81, stdev=1177.38 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10421], 00:10:04.216 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:04.216 | 70.00th=[11076], 80.00th=[11338], 90.00th=[12387], 95.00th=[13698], 00:10:04.216 | 99.00th=[15533], 99.50th=[15664], 99.90th=[17171], 99.95th=[17171], 00:10:04.216 | 99.99th=[17957] 00:10:04.216 bw ( KiB/s): min=20439, max=24625, per=30.75%, avg=22532.00, stdev=2959.95, samples=2 00:10:04.216 iops : min= 5109, max= 6156, avg=5632.50, stdev=740.34, samples=2 00:10:04.216 lat (msec) : 4=0.03%, 10=3.71%, 20=95.30%, 50=0.96% 00:10:04.216 cpu : usr=6.68%, sys=21.83%, ctx=539, majf=0, minf=13 00:10:04.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:04.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.216 issued rwts: total=5460,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.216 job3: (groupid=0, jobs=1): err= 0: pid=68511: Mon Jul 15 21:23:37 2024 00:10:04.216 read: IOPS=2819, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1002msec) 00:10:04.216 slat (usec): min=7, max=7459, avg=168.46, stdev=652.44 00:10:04.216 clat (usec): min=785, max=34728, avg=21300.84, stdev=3993.49 00:10:04.216 lat (usec): min=805, max=35627, avg=21469.30, stdev=3993.08 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[ 4948], 5.00th=[15664], 10.00th=[17433], 20.00th=[19268], 00:10:04.216 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:10:04.216 | 70.00th=[22414], 80.00th=[22938], 90.00th=[25822], 95.00th=[27395], 00:10:04.216 | 99.00th=[30802], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 00:10:04.216 | 99.99th=[34866] 00:10:04.216 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:04.216 slat (usec): min=9, max=7280, avg=159.31, stdev=597.43 00:10:04.216 clat (usec): min=12047, max=39131, 
avg=21494.62, stdev=4812.12 00:10:04.216 lat (usec): min=12081, max=39168, avg=21653.92, stdev=4830.28 00:10:04.216 clat percentiles (usec): 00:10:04.216 | 1.00th=[14484], 5.00th=[15533], 10.00th=[15795], 20.00th=[17695], 00:10:04.216 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[21103], 00:10:04.216 | 70.00th=[21890], 80.00th=[24249], 90.00th=[27657], 95.00th=[31589], 00:10:04.216 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:10:04.216 | 99.99th=[39060] 00:10:04.216 bw ( KiB/s): min=12288, max=12288, per=16.77%, avg=12288.00, stdev= 0.00, samples=1 00:10:04.216 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:04.216 lat (usec) : 1000=0.15% 00:10:04.216 lat (msec) : 2=0.02%, 10=1.07%, 20=25.50%, 50=73.26% 00:10:04.216 cpu : usr=3.90%, sys=12.39%, ctx=711, majf=0, minf=13 00:10:04.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:04.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.216 issued rwts: total=2825,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.216 00:10:04.216 Run status group 0 (all jobs): 00:10:04.216 READ: bw=67.3MiB/s (70.5MB/s), 11.0MiB/s-23.1MiB/s (11.5MB/s-24.3MB/s), io=67.6MiB (70.9MB), run=1002-1005msec 00:10:04.216 WRITE: bw=71.6MiB/s (75.0MB/s), 12.0MiB/s-23.9MiB/s (12.6MB/s-25.0MB/s), io=71.9MiB (75.4MB), run=1002-1005msec 00:10:04.216 00:10:04.216 Disk stats (read/write): 00:10:04.216 nvme0n1: ios=5202/5632, merge=0/0, ticks=12102/10863, in_queue=22965, util=89.27% 00:10:04.216 nvme0n2: ios=2642/3072, merge=0/0, ticks=15665/15946, in_queue=31611, util=87.69% 00:10:04.216 nvme0n3: ios=4956/5120, merge=0/0, ticks=11640/10509, in_queue=22149, util=89.75% 00:10:04.216 nvme0n4: ios=2560/2655, merge=0/0, ticks=16875/14370, in_queue=31245, util=88.68% 00:10:04.216 21:23:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:04.216 [global] 00:10:04.216 thread=1 00:10:04.216 invalidate=1 00:10:04.216 rw=randwrite 00:10:04.216 time_based=1 00:10:04.216 runtime=1 00:10:04.216 ioengine=libaio 00:10:04.216 direct=1 00:10:04.216 bs=4096 00:10:04.216 iodepth=128 00:10:04.216 norandommap=0 00:10:04.216 numjobs=1 00:10:04.216 00:10:04.216 verify_dump=1 00:10:04.216 verify_backlog=512 00:10:04.216 verify_state_save=0 00:10:04.216 do_verify=1 00:10:04.216 verify=crc32c-intel 00:10:04.216 [job0] 00:10:04.216 filename=/dev/nvme0n1 00:10:04.216 [job1] 00:10:04.216 filename=/dev/nvme0n2 00:10:04.216 [job2] 00:10:04.216 filename=/dev/nvme0n3 00:10:04.216 [job3] 00:10:04.216 filename=/dev/nvme0n4 00:10:04.216 Could not set queue depth (nvme0n1) 00:10:04.216 Could not set queue depth (nvme0n2) 00:10:04.216 Could not set queue depth (nvme0n3) 00:10:04.216 Could not set queue depth (nvme0n4) 00:10:04.216 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.216 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.216 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.216 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.216 fio-3.35 00:10:04.216 
Starting 4 threads 00:10:05.592 00:10:05.592 job0: (groupid=0, jobs=1): err= 0: pid=68565: Mon Jul 15 21:23:38 2024 00:10:05.592 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:10:05.592 slat (usec): min=9, max=2841, avg=83.08, stdev=291.13 00:10:05.592 clat (usec): min=8560, max=16020, avg=11420.39, stdev=721.94 00:10:05.592 lat (usec): min=8593, max=17123, avg=11503.46, stdev=759.48 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:10:05.592 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:10:05.592 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12518], 95.00th=[12780], 00:10:05.592 | 99.00th=[13566], 99.50th=[14484], 99.90th=[15926], 99.95th=[16057], 00:10:05.592 | 99.99th=[16057] 00:10:05.592 write: IOPS=5759, BW=22.5MiB/s (23.6MB/s)(22.5MiB/1001msec); 0 zone resets 00:10:05.592 slat (usec): min=20, max=5386, avg=80.86, stdev=287.94 00:10:05.592 clat (usec): min=136, max=14406, avg=10781.64, stdev=1139.97 00:10:05.592 lat (usec): min=1828, max=16246, avg=10862.50, stdev=1170.55 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 5997], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:10:05.592 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:05.592 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[12518], 00:10:05.592 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14353], 99.95th=[14353], 00:10:05.592 | 99.99th=[14353] 00:10:05.592 bw ( KiB/s): min=24208, max=24208, per=28.05%, avg=24208.00, stdev= 0.00, samples=1 00:10:05.592 iops : min= 6052, max= 6052, avg=6052.00, stdev= 0.00, samples=1 00:10:05.592 lat (usec) : 250=0.01% 00:10:05.592 lat (msec) : 2=0.04%, 4=0.32%, 10=5.61%, 20=94.02% 00:10:05.592 cpu : usr=6.40%, sys=23.60%, ctx=468, majf=0, minf=13 00:10:05.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.592 issued rwts: total=5632,5765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.592 job1: (groupid=0, jobs=1): err= 0: pid=68566: Mon Jul 15 21:23:38 2024 00:10:05.592 read: IOPS=5500, BW=21.5MiB/s (22.5MB/s)(21.5MiB/1001msec) 00:10:05.592 slat (usec): min=17, max=5329, avg=85.68, stdev=337.78 00:10:05.592 clat (usec): min=275, max=15006, avg=11589.07, stdev=1047.06 00:10:05.592 lat (usec): min=660, max=15026, avg=11674.75, stdev=994.69 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 5932], 5.00th=[10421], 10.00th=[11207], 20.00th=[11338], 00:10:05.592 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:10:05.592 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[12387], 00:10:05.592 | 99.00th=[14091], 99.50th=[15008], 99.90th=[15008], 99.95th=[15008], 00:10:05.592 | 99.99th=[15008] 00:10:05.592 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:10:05.592 slat (usec): min=12, max=3532, avg=83.08, stdev=284.29 00:10:05.592 clat (usec): min=8573, max=13745, avg=11132.63, stdev=444.33 00:10:05.592 lat (usec): min=9333, max=13774, avg=11215.71, stdev=367.82 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:10:05.592 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11076], 60.00th=[11207], 00:10:05.592 | 
70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:10:05.592 | 99.00th=[12387], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:10:05.592 | 99.99th=[13698] 00:10:05.592 bw ( KiB/s): min=22000, max=23102, per=26.13%, avg=22551.00, stdev=779.23, samples=2 00:10:05.592 iops : min= 5500, max= 5775, avg=5637.50, stdev=194.45, samples=2 00:10:05.592 lat (usec) : 500=0.01%, 750=0.01% 00:10:05.592 lat (msec) : 4=0.29%, 10=2.80%, 20=96.89% 00:10:05.592 cpu : usr=6.30%, sys=21.10%, ctx=455, majf=0, minf=13 00:10:05.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.592 issued rwts: total=5506,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.592 job2: (groupid=0, jobs=1): err= 0: pid=68567: Mon Jul 15 21:23:38 2024 00:10:05.592 read: IOPS=4596, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:05.592 slat (usec): min=7, max=3927, avg=97.97, stdev=361.53 00:10:05.592 clat (usec): min=2078, max=16991, avg=13239.94, stdev=915.50 00:10:05.592 lat (usec): min=2593, max=17039, avg=13337.92, stdev=958.79 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[10421], 5.00th=[11731], 10.00th=[12518], 20.00th=[12780], 00:10:05.592 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:10:05.592 | 70.00th=[13435], 80.00th=[13566], 90.00th=[14484], 95.00th=[15008], 00:10:05.592 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16319], 99.95th=[16581], 00:10:05.592 | 99.99th=[16909] 00:10:05.592 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:05.592 slat (usec): min=14, max=6313, avg=95.87, stdev=360.78 00:10:05.592 clat (usec): min=2626, max=18513, avg=12806.00, stdev=1220.64 00:10:05.592 lat (usec): min=2658, max=18565, avg=12901.87, stdev=1264.75 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 7570], 5.00th=[11600], 10.00th=[12125], 20.00th=[12387], 00:10:05.592 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:10:05.592 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13698], 95.00th=[15139], 00:10:05.592 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16319], 99.95th=[17957], 00:10:05.592 | 99.99th=[18482] 00:10:05.592 bw ( KiB/s): min=20480, max=20480, per=23.73%, avg=20480.00, stdev= 0.00, samples=1 00:10:05.592 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:05.592 lat (msec) : 4=0.17%, 10=0.83%, 20=98.99% 00:10:05.592 cpu : usr=5.59%, sys=20.46%, ctx=462, majf=0, minf=15 00:10:05.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.592 issued rwts: total=4610,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.592 job3: (groupid=0, jobs=1): err= 0: pid=68568: Mon Jul 15 21:23:38 2024 00:10:05.592 read: IOPS=4855, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1002msec) 00:10:05.592 slat (usec): min=5, max=7303, avg=96.27, stdev=429.30 00:10:05.592 clat (usec): min=667, max=17951, avg=12944.27, stdev=1393.54 00:10:05.592 lat (usec): min=2582, max=18332, avg=13040.54, stdev=1332.55 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[ 6194], 
5.00th=[11469], 10.00th=[12518], 20.00th=[12649], 00:10:05.592 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:10:05.592 | 70.00th=[13304], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698], 00:10:05.592 | 99.00th=[17957], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:10:05.592 | 99.99th=[17957] 00:10:05.592 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:05.592 slat (usec): min=6, max=5006, avg=93.52, stdev=352.96 00:10:05.592 clat (usec): min=8241, max=15003, avg=12426.94, stdev=661.13 00:10:05.592 lat (usec): min=9387, max=15018, avg=12520.46, stdev=567.70 00:10:05.592 clat percentiles (usec): 00:10:05.592 | 1.00th=[10159], 5.00th=[11469], 10.00th=[11994], 20.00th=[12125], 00:10:05.592 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:10:05.592 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13173], 00:10:05.592 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15008], 99.95th=[15008], 00:10:05.592 | 99.99th=[15008] 00:10:05.592 bw ( KiB/s): min=20480, max=20480, per=23.73%, avg=20480.00, stdev= 0.00, samples=1 00:10:05.592 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:05.592 lat (usec) : 750=0.01% 00:10:05.592 lat (msec) : 4=0.32%, 10=0.94%, 20=98.73% 00:10:05.592 cpu : usr=5.39%, sys=18.58%, ctx=333, majf=0, minf=11 00:10:05.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:05.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.592 issued rwts: total=4865,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.592 00:10:05.592 Run status group 0 (all jobs): 00:10:05.592 READ: bw=80.3MiB/s (84.2MB/s), 18.0MiB/s-22.0MiB/s (18.8MB/s-23.0MB/s), io=80.5MiB (84.4MB), run=1001-1003msec 00:10:05.592 WRITE: bw=84.3MiB/s (88.4MB/s), 19.9MiB/s-22.5MiB/s (20.9MB/s-23.6MB/s), io=84.5MiB (88.6MB), run=1001-1003msec 00:10:05.592 00:10:05.592 Disk stats (read/write): 00:10:05.592 nvme0n1: ios=4764/5120, merge=0/0, ticks=16661/14021, in_queue=30682, util=88.26% 00:10:05.592 nvme0n2: ios=4657/5057, merge=0/0, ticks=11480/11026, in_queue=22506, util=88.59% 00:10:05.592 nvme0n3: ios=4113/4310, merge=0/0, ticks=16872/14256, in_queue=31128, util=89.41% 00:10:05.592 nvme0n4: ios=4113/4544, merge=0/0, ticks=11629/11183, in_queue=22812, util=89.67% 00:10:05.592 21:23:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:05.592 21:23:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68581 00:10:05.592 21:23:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:05.592 21:23:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:05.592 [global] 00:10:05.592 thread=1 00:10:05.592 invalidate=1 00:10:05.592 rw=read 00:10:05.592 time_based=1 00:10:05.592 runtime=10 00:10:05.592 ioengine=libaio 00:10:05.592 direct=1 00:10:05.592 bs=4096 00:10:05.592 iodepth=1 00:10:05.592 norandommap=1 00:10:05.592 numjobs=1 00:10:05.592 00:10:05.592 [job0] 00:10:05.592 filename=/dev/nvme0n1 00:10:05.592 [job1] 00:10:05.592 filename=/dev/nvme0n2 00:10:05.592 [job2] 00:10:05.592 filename=/dev/nvme0n3 00:10:05.592 [job3] 00:10:05.592 filename=/dev/nvme0n4 00:10:05.592 Could not set queue depth (nvme0n1) 00:10:05.592 Could not set queue depth (nvme0n2) 00:10:05.592 Could not set 
queue depth (nvme0n3) 00:10:05.593 Could not set queue depth (nvme0n4) 00:10:05.850 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.850 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.850 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.850 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.850 fio-3.35 00:10:05.850 Starting 4 threads 00:10:09.140 21:23:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:09.140 fio: pid=68624, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.140 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=47685632, buflen=4096 00:10:09.140 21:23:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:09.140 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=80576512, buflen=4096 00:10:09.140 fio: pid=68623, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.140 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.140 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:09.140 fio: pid=68621, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.140 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=20639744, buflen=4096 00:10:09.140 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.140 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:09.416 fio: pid=68622, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:09.416 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=62595072, buflen=4096 00:10:09.416 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.416 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:09.416 00:10:09.416 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68621: Mon Jul 15 21:23:42 2024 00:10:09.416 read: IOPS=6747, BW=26.4MiB/s (27.6MB/s)(83.7MiB/3175msec) 00:10:09.416 slat (usec): min=6, max=18368, avg=10.61, stdev=185.60 00:10:09.416 clat (usec): min=95, max=6008, avg=136.95, stdev=96.07 00:10:09.416 lat (usec): min=105, max=18531, avg=147.56, stdev=209.51 00:10:09.416 clat percentiles (usec): 00:10:09.416 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:10:09.416 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:10:09.416 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:10:09.416 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 1319], 99.95th=[ 2376], 00:10:09.416 | 99.99th=[ 4113] 00:10:09.416 bw ( KiB/s): min=25466, max=28072, per=33.54%, avg=27076.33, stdev=973.46, samples=6 00:10:09.416 iops : min= 6366, max= 7018, avg=6769.00, stdev=243.53, samples=6 00:10:09.416 lat (usec) : 
100=0.07%, 250=99.71%, 500=0.07%, 750=0.04%, 1000=0.01% 00:10:09.416 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01% 00:10:09.416 cpu : usr=1.17%, sys=4.98%, ctx=21434, majf=0, minf=1 00:10:09.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.416 issued rwts: total=21424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.416 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68622: Mon Jul 15 21:23:42 2024 00:10:09.416 read: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(59.7MiB/3370msec) 00:10:09.416 slat (usec): min=7, max=12407, avg=11.78, stdev=183.38 00:10:09.416 clat (usec): min=90, max=3743, avg=208.11, stdev=72.47 00:10:09.416 lat (usec): min=98, max=12576, avg=219.89, stdev=196.15 00:10:09.416 clat percentiles (usec): 00:10:09.416 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 122], 20.00th=[ 137], 00:10:09.416 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 231], 00:10:09.416 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 255], 00:10:09.416 | 99.00th=[ 281], 99.50th=[ 343], 99.90th=[ 742], 99.95th=[ 1205], 00:10:09.416 | 99.99th=[ 3556] 00:10:09.416 bw ( KiB/s): min=16264, max=20881, per=21.39%, avg=17272.17, stdev=1784.20, samples=6 00:10:09.416 iops : min= 4066, max= 5220, avg=4318.00, stdev=445.95, samples=6 00:10:09.416 lat (usec) : 100=0.82%, 250=91.12%, 500=7.85%, 750=0.10%, 1000=0.03% 00:10:09.416 lat (msec) : 2=0.05%, 4=0.02% 00:10:09.416 cpu : usr=0.65%, sys=3.41%, ctx=15291, majf=0, minf=1 00:10:09.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.416 issued rwts: total=15283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.416 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68623: Mon Jul 15 21:23:42 2024 00:10:09.416 read: IOPS=6559, BW=25.6MiB/s (26.9MB/s)(76.8MiB/2999msec) 00:10:09.416 slat (usec): min=6, max=10968, avg= 9.10, stdev=93.79 00:10:09.416 clat (usec): min=105, max=1936, avg=142.66, stdev=27.46 00:10:09.416 lat (usec): min=116, max=11147, avg=151.76, stdev=98.02 00:10:09.416 clat percentiles (usec): 00:10:09.416 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:10:09.416 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:10:09.416 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 161], 00:10:09.416 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 231], 99.95th=[ 375], 00:10:09.416 | 99.99th=[ 1795] 00:10:09.416 bw ( KiB/s): min=26088, max=26560, per=32.74%, avg=26435.20, stdev=201.01, samples=5 00:10:09.416 iops : min= 6522, max= 6640, avg=6608.80, stdev=50.25, samples=5 00:10:09.416 lat (usec) : 250=99.93%, 500=0.03%, 1000=0.01% 00:10:09.416 lat (msec) : 2=0.03% 00:10:09.416 cpu : usr=1.10%, sys=5.14%, ctx=19682, majf=0, minf=1 00:10:09.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.416 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:09.416 issued rwts: total=19673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.416 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68624: Mon Jul 15 21:23:42 2024 00:10:09.416 read: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(45.5MiB/2807msec) 00:10:09.416 slat (nsec): min=6879, max=75031, avg=7584.36, stdev=1864.31 00:10:09.416 clat (usec): min=122, max=1521, avg=232.82, stdev=29.03 00:10:09.416 lat (usec): min=129, max=1528, avg=240.40, stdev=29.01 00:10:09.416 clat percentiles (usec): 00:10:09.416 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:10:09.416 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:10:09.416 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 258], 00:10:09.416 | 99.00th=[ 281], 99.50th=[ 322], 99.90th=[ 478], 99.95th=[ 775], 00:10:09.416 | 99.99th=[ 1450] 00:10:09.416 bw ( KiB/s): min=16384, max=16944, per=20.57%, avg=16604.80, stdev=245.12, samples=5 00:10:09.416 iops : min= 4096, max= 4236, avg=4151.20, stdev=61.28, samples=5 00:10:09.417 lat (usec) : 250=90.29%, 500=9.61%, 750=0.04%, 1000=0.02% 00:10:09.417 lat (msec) : 2=0.03% 00:10:09.417 cpu : usr=0.43%, sys=3.21%, ctx=11643, majf=0, minf=2 00:10:09.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.417 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.417 issued rwts: total=11643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.417 00:10:09.417 Run status group 0 (all jobs): 00:10:09.417 READ: bw=78.8MiB/s (82.7MB/s), 16.2MiB/s-26.4MiB/s (17.0MB/s-27.6MB/s), io=266MiB (279MB), run=2807-3370msec 00:10:09.417 00:10:09.417 Disk stats (read/write): 00:10:09.417 nvme0n1: ios=21084/0, merge=0/0, ticks=2869/0, in_queue=2869, util=94.33% 00:10:09.417 nvme0n2: ios=13591/0, merge=0/0, ticks=2997/0, in_queue=2997, util=95.56% 00:10:09.417 nvme0n3: ios=18917/0, merge=0/0, ticks=2711/0, in_queue=2711, util=96.17% 00:10:09.417 nvme0n4: ios=10870/0, merge=0/0, ticks=2556/0, in_queue=2556, util=96.51% 00:10:09.417 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.417 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:09.674 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.674 21:23:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:09.933 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.933 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:10.192 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.192 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:10.451 21:23:43 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68581 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.451 nvmf hotplug test: fio failed as expected 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:10.451 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.710 rmmod nvme_tcp 00:10:10.710 rmmod nvme_fabrics 00:10:10.710 rmmod nvme_keyring 00:10:10.710 21:23:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68205 ']' 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68205 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68205 ']' 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68205 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:10.710 21:23:44 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68205 00:10:10.711 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:10.711 killing process with pid 68205 00:10:10.711 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:10.711 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68205' 00:10:10.711 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68205 00:10:10.711 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68205 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:10.969 ************************************ 00:10:10.969 END TEST nvmf_fio_target 00:10:10.969 ************************************ 00:10:10.969 00:10:10.969 real 0m18.092s 00:10:10.969 user 1m6.817s 00:10:10.969 sys 0m10.607s 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.969 21:23:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.228 21:23:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:11.228 21:23:44 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:11.228 21:23:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.228 21:23:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.228 21:23:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.228 ************************************ 00:10:11.228 START TEST nvmf_bdevio 00:10:11.228 ************************************ 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:11.228 * Looking for test storage... 
00:10:11.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.228 21:23:44 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:11.228 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:11.229 Cannot find device "nvmf_tgt_br" 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:11.229 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.229 Cannot find device "nvmf_tgt_br2" 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:11.486 Cannot find device "nvmf_tgt_br" 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:11.486 Cannot find device "nvmf_tgt_br2" 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:11.486 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:11.744 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:11.744 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:11.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:10:11.745 00:10:11.745 --- 10.0.0.2 ping statistics --- 00:10:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.745 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:11.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:11.745 00:10:11.745 --- 10.0.0.3 ping statistics --- 00:10:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.745 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:10:11.745 00:10:11.745 --- 10.0.0.1 ping statistics --- 00:10:11.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.745 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68886 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68886 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 68886 ']' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.745 21:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.745 [2024-07-15 21:23:45.024253] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:10:11.745 [2024-07-15 21:23:45.024305] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.004 [2024-07-15 21:23:45.166687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.004 [2024-07-15 21:23:45.243381] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.004 [2024-07-15 21:23:45.243652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
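The nvmf_veth_init trace above builds the test network the rest of this run depends on: an initiator-side veth (nvmf_init_if, 10.0.0.1/24), two target-side veths moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24), a bridge nvmf_br joining the host-side peers, and iptables rules admitting NVMe/TCP traffic on port 4420. The earlier "Cannot find device" and "Cannot open network namespace" messages are the pre-setup cleanup failing harmlessly on a fresh host. A minimal standalone sketch of the same topology, reconstructed from the commands in the trace (run as root; the real helper in test/nvmf/common.sh also handles teardown and error paths):

#!/usr/bin/env bash
# Minimal sketch of the topology nvmf_veth_init sets up; names and addresses
# taken from the trace above. Assumes root and a clean host.
set -e

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: the *_if ends carry traffic, the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the initiator and the target namespace can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on port 4420 and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2    # initiator -> first target address, as the trace does
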
00:10:12.004 [2024-07-15 21:23:45.244017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.004 [2024-07-15 21:23:45.244032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.004 [2024-07-15 21:23:45.244040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.004 [2024-07-15 21:23:45.244333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:12.004 [2024-07-15 21:23:45.244564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:12.004 [2024-07-15 21:23:45.244721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:12.004 [2024-07-15 21:23:45.244756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.004 [2024-07-15 21:23:45.285339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.570 [2024-07-15 21:23:45.921231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.570 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.830 Malloc0 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.830 [2024-07-15 21:23:45.989180] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:12.830 { 00:10:12.830 "params": { 00:10:12.830 "name": "Nvme$subsystem", 00:10:12.830 "trtype": "$TEST_TRANSPORT", 00:10:12.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.830 "adrfam": "ipv4", 00:10:12.830 "trsvcid": "$NVMF_PORT", 00:10:12.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.830 "hdgst": ${hdgst:-false}, 00:10:12.830 "ddgst": ${ddgst:-false} 00:10:12.830 }, 00:10:12.830 "method": "bdev_nvme_attach_controller" 00:10:12.830 } 00:10:12.830 EOF 00:10:12.830 )") 00:10:12.830 21:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:12.830 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:12.830 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:12.830 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:12.830 "params": { 00:10:12.830 "name": "Nvme1", 00:10:12.830 "trtype": "tcp", 00:10:12.830 "traddr": "10.0.0.2", 00:10:12.830 "adrfam": "ipv4", 00:10:12.830 "trsvcid": "4420", 00:10:12.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:12.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:12.830 "hdgst": false, 00:10:12.830 "ddgst": false 00:10:12.830 }, 00:10:12.830 "method": "bdev_nvme_attach_controller" 00:10:12.830 }' 00:10:12.830 [2024-07-15 21:23:46.041371] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
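At this point bdevio.sh has provisioned the target over RPC (a TCP transport with 8192-byte in-capsule data, a 64 MiB Malloc0 bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420) and launched the bdevio app with a JSON config produced by gen_nvmf_target_json and passed on /dev/fd/62. A hedged equivalent using scripts/rpc.py directly is sketched below; the JSON wrapper is the standard SPDK --json subsystem layout rather than a verbatim copy of the helper's output, and /tmp/bdevio.json is illustrative since the test streams the config over a file descriptor instead of writing a file.

#!/usr/bin/env bash
# Sketch of the provisioning bdevio.sh performs through rpc_cmd, using rpc.py
# directly. SPDK path as in this run; default RPC socket /var/tmp/spdk.sock assumed.
set -e
SPDK=/home/vagrant/spdk_repo/spdk

$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevio consumes a config equivalent to the bdev_nvme_attach_controller
# parameters printed in the trace above.
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

$SPDK/test/bdev/bdevio/bdevio --json /tmp/bdevio.json
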
00:10:12.830 [2024-07-15 21:23:46.041426] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68922 ] 00:10:12.830 [2024-07-15 21:23:46.181692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:13.135 [2024-07-15 21:23:46.258669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.135 [2024-07-15 21:23:46.258874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.135 [2024-07-15 21:23:46.258878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.135 [2024-07-15 21:23:46.308947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:13.135 I/O targets: 00:10:13.135 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:13.135 00:10:13.135 00:10:13.135 CUnit - A unit testing framework for C - Version 2.1-3 00:10:13.135 http://cunit.sourceforge.net/ 00:10:13.135 00:10:13.135 00:10:13.135 Suite: bdevio tests on: Nvme1n1 00:10:13.135 Test: blockdev write read block ...passed 00:10:13.135 Test: blockdev write zeroes read block ...passed 00:10:13.135 Test: blockdev write zeroes read no split ...passed 00:10:13.135 Test: blockdev write zeroes read split ...passed 00:10:13.135 Test: blockdev write zeroes read split partial ...passed 00:10:13.135 Test: blockdev reset ...[2024-07-15 21:23:46.444125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:13.135 [2024-07-15 21:23:46.444381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cda7c0 (9): Bad file descriptor 00:10:13.135 [2024-07-15 21:23:46.456804] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:13.135 passed 00:10:13.135 Test: blockdev write read 8 blocks ...passed 00:10:13.135 Test: blockdev write read size > 128k ...passed 00:10:13.135 Test: blockdev write read invalid size ...passed 00:10:13.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:13.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:13.135 Test: blockdev write read max offset ...passed 00:10:13.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:13.135 Test: blockdev writev readv 8 blocks ...passed 00:10:13.135 Test: blockdev writev readv 30 x 1block ...passed 00:10:13.135 Test: blockdev writev readv block ...passed 00:10:13.135 Test: blockdev writev readv size > 128k ...passed 00:10:13.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:13.135 Test: blockdev comparev and writev ...[2024-07-15 21:23:46.463203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.463364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.463611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.463636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.463867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.463891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.463900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.464112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.464123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.464136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:13.135 [2024-07-15 21:23:46.464144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:13.135 passed 00:10:13.135 Test: blockdev nvme passthru rw ...passed 00:10:13.135 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:23:46.464744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:13.135 [2024-07-15 21:23:46.464764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.464845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:13.135 [2024-07-15 21:23:46.464856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.464933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:13.135 [2024-07-15 21:23:46.464944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:13.135 [2024-07-15 21:23:46.465015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:13.135 [2024-07-15 21:23:46.465026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:13.135 passed 00:10:13.135 Test: blockdev nvme admin passthru ...passed 00:10:13.135 Test: blockdev copy ...passed 00:10:13.135 00:10:13.135 Run Summary: Type Total Ran Passed Failed Inactive 00:10:13.135 suites 1 1 n/a 0 0 00:10:13.136 tests 23 23 23 0 0 00:10:13.136 asserts 152 152 152 0 n/a 00:10:13.136 00:10:13.136 Elapsed time = 0.136 seconds 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.415 rmmod nvme_tcp 00:10:13.415 rmmod nvme_fabrics 00:10:13.415 rmmod nvme_keyring 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68886 ']' 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68886 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
68886 ']' 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 68886 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.415 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68886 00:10:13.674 killing process with pid 68886 00:10:13.674 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:13.674 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:13.674 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68886' 00:10:13.674 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 68886 00:10:13.674 21:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 68886 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.674 21:23:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.933 21:23:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:13.933 ************************************ 00:10:13.933 END TEST nvmf_bdevio 00:10:13.933 ************************************ 00:10:13.933 00:10:13.933 real 0m2.697s 00:10:13.933 user 0m8.144s 00:10:13.933 sys 0m0.864s 00:10:13.933 21:23:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.934 21:23:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.934 21:23:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:13.934 21:23:47 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:13.934 21:23:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:13.934 21:23:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.934 21:23:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.934 ************************************ 00:10:13.934 START TEST nvmf_auth_target 00:10:13.934 ************************************ 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:13.934 * Looking for test storage... 
00:10:13.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.934 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:14.193 Cannot find device "nvmf_tgt_br" 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.193 Cannot find device "nvmf_tgt_br2" 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:14.193 Cannot find device "nvmf_tgt_br" 00:10:14.193 
21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:14.193 Cannot find device "nvmf_tgt_br2" 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:14.193 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.452 21:23:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:14.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:14.452 00:10:14.452 --- 10.0.0.2 ping statistics --- 00:10:14.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.452 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:14.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:14.452 00:10:14.452 --- 10.0.0.3 ping statistics --- 00:10:14.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.452 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:14.452 00:10:14.452 --- 10.0.0.1 ping statistics --- 00:10:14.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.452 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.452 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69099 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69099 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69099 ']' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.453 21:23:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.453 21:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69131 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc250471c093b7e3325ce00f038c7edc0b202bf8accc5188 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.akQ 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc250471c093b7e3325ce00f038c7edc0b202bf8accc5188 0 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc250471c093b7e3325ce00f038c7edc0b202bf8accc5188 0 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc250471c093b7e3325ce00f038c7edc0b202bf8accc5188 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.akQ 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.akQ 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.akQ 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3c8cffd12da00fdfda056ea4afb0e35c3a119d86453fcad4b30ef8f0e398ac1 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.W5v 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3c8cffd12da00fdfda056ea4afb0e35c3a119d86453fcad4b30ef8f0e398ac1 3 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3c8cffd12da00fdfda056ea4afb0e35c3a119d86453fcad4b30ef8f0e398ac1 3 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a3c8cffd12da00fdfda056ea4afb0e35c3a119d86453fcad4b30ef8f0e398ac1 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:15.388 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.W5v 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.W5v 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.W5v 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=33fdf74e575e6867a5a1b3c06f61a142 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4wz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 33fdf74e575e6867a5a1b3c06f61a142 1 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 33fdf74e575e6867a5a1b3c06f61a142 1 
00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=33fdf74e575e6867a5a1b3c06f61a142 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4wz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4wz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.4wz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1fd378fd5ed485eeb8474317d7811ab985dc0e8775f6793b 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wJz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1fd378fd5ed485eeb8474317d7811ab985dc0e8775f6793b 2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1fd378fd5ed485eeb8474317d7811ab985dc0e8775f6793b 2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1fd378fd5ed485eeb8474317d7811ab985dc0e8775f6793b 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wJz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wJz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.wJz 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:15.648 
21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=522a59b6350ebb4691a08ea39733dcc02455a57d9fbbddaa 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OGo 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 522a59b6350ebb4691a08ea39733dcc02455a57d9fbbddaa 2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 522a59b6350ebb4691a08ea39733dcc02455a57d9fbbddaa 2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=522a59b6350ebb4691a08ea39733dcc02455a57d9fbbddaa 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:15.648 21:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OGo 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OGo 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.OGo 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:15.648 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f53b682d943776f162bf49300e77a213 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BHK 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f53b682d943776f162bf49300e77a213 1 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f53b682d943776f162bf49300e77a213 1 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f53b682d943776f162bf49300e77a213 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BHK 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BHK 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BHK 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=51c1b3555417cdf0b08b0839d96255186ed15acd39504094d9f8465a0998158c 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FZt 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 51c1b3555417cdf0b08b0839d96255186ed15acd39504094d9f8465a0998158c 3 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 51c1b3555417cdf0b08b0839d96255186ed15acd39504094d9f8465a0998158c 3 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=51c1b3555417cdf0b08b0839d96255186ed15acd39504094d9f8465a0998158c 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FZt 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FZt 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.FZt 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69099 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69099 ']' 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
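[Editor's note] The gen_dhchap_key/format_dhchap_key entries above draw random bytes with xxd and wrap them into DHHC-1 secret files; the body of the `python -` heredoc is not echoed by xtrace. The sketch below is a hypothetical standalone helper (name gen_dhchap_secret is invented) that mirrors those traced steps, assuming the standard DH-HMAC-CHAP secret representation of base64(key bytes followed by a little-endian CRC-32), which matches the DHHC-1:<hash id>:<base64>: strings seen later in the log.

    # Hypothetical helper mirroring the traced key generation (not the script's own code).
    gen_dhchap_secret() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters -> len/2 random bytes
        file=$(mktemp -t spdk.key-example.XXX)
        # Assumed secret layout: "DHHC-1:<hash id>:" + base64(key || CRC-32(key)) + ":"
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
    ' "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }
    # e.g. keyfile=$(gen_dhchap_secret 2 48)   # 24-byte key tagged with hash id 2 (sha384)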
00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.908 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69131 /var/tmp/host.sock 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69131 ']' 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.akQ 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.167 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.akQ 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.akQ 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.W5v ]] 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W5v 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W5v 00:10:16.425 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.W5v 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4wz 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4wz 00:10:16.684 21:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4wz 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.wJz ]] 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wJz 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wJz 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wJz 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OGo 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OGo 00:10:16.943 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OGo 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.BHK ]] 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHK 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHK 00:10:17.202 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHK 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:17.481 
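[Editor's note] Each generated key file is registered under the same key name on both RPC servers: per the waitforlisten messages above, rpc_cmd talks to the target at /var/tmp/spdk.sock, while the hostrpc wrapper passes -s /var/tmp/host.sock to the host application. A condensed sketch of that double registration, reusing the file paths from this run (the RPC variable name is illustrative only):

    # Register key1/ckey1 with both the NVMe-oF target and the host bdev_nvme keyring.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.4wz                       # target side
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wJz
    $RPC -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.4wz    # host side
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.wJz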
21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.FZt 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.FZt 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.FZt 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.481 21:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.740 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.998 00:10:17.998 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.998 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:17.998 21:23:51 nvmf_tcp.nvmf_auth_target -- 
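[Editor's note] The first connect_authenticate round above follows a fixed pattern: constrain the host to one digest/dhgroup pair, grant the host NQN on the subsystem with the matching key names, then attach a controller so DH-HMAC-CHAP is actually negotiated. A sketch of that sequence with the NQNs and key names taken from this run (rpc_cmd and hostrpc are the test framework's wrappers described above):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0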
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.256 { 00:10:18.256 "cntlid": 1, 00:10:18.256 "qid": 0, 00:10:18.256 "state": "enabled", 00:10:18.256 "thread": "nvmf_tgt_poll_group_000", 00:10:18.256 "listen_address": { 00:10:18.256 "trtype": "TCP", 00:10:18.256 "adrfam": "IPv4", 00:10:18.256 "traddr": "10.0.0.2", 00:10:18.256 "trsvcid": "4420" 00:10:18.256 }, 00:10:18.256 "peer_address": { 00:10:18.256 "trtype": "TCP", 00:10:18.256 "adrfam": "IPv4", 00:10:18.256 "traddr": "10.0.0.1", 00:10:18.256 "trsvcid": "42432" 00:10:18.256 }, 00:10:18.256 "auth": { 00:10:18.256 "state": "completed", 00:10:18.256 "digest": "sha256", 00:10:18.256 "dhgroup": "null" 00:10:18.256 } 00:10:18.256 } 00:10:18.256 ]' 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:18.256 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.515 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.515 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.515 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.515 21:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
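[Editor's note] After the attach, the qpair JSON dumped by nvmf_subsystem_get_qpairs carries an auth object, and the trace asserts its fields with jq. A minimal sketch of those checks for the sha256/null round, using the same jq paths as the trace:

    # Verify the controller came up and the admin queue completed DH-HMAC-CHAP as expected.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]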
"${!keys[@]}" 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.701 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.701 { 00:10:22.701 "cntlid": 3, 00:10:22.701 "qid": 0, 00:10:22.701 "state": "enabled", 00:10:22.701 "thread": "nvmf_tgt_poll_group_000", 00:10:22.701 "listen_address": { 00:10:22.701 "trtype": "TCP", 00:10:22.701 "adrfam": "IPv4", 00:10:22.701 "traddr": "10.0.0.2", 00:10:22.701 "trsvcid": "4420" 00:10:22.701 }, 00:10:22.701 "peer_address": { 00:10:22.701 "trtype": "TCP", 00:10:22.701 
"adrfam": "IPv4", 00:10:22.701 "traddr": "10.0.0.1", 00:10:22.701 "trsvcid": "55998" 00:10:22.701 }, 00:10:22.701 "auth": { 00:10:22.701 "state": "completed", 00:10:22.701 "digest": "sha256", 00:10:22.701 "dhgroup": "null" 00:10:22.701 } 00:10:22.701 } 00:10:22.701 ]' 00:10:22.701 21:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.701 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.701 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.701 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:22.701 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.960 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.960 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.960 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.960 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.526 21:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.784 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:23.784 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.784 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.785 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.043 00:10:24.043 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.043 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.043 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.302 { 00:10:24.302 "cntlid": 5, 00:10:24.302 "qid": 0, 00:10:24.302 "state": "enabled", 00:10:24.302 "thread": "nvmf_tgt_poll_group_000", 00:10:24.302 "listen_address": { 00:10:24.302 "trtype": "TCP", 00:10:24.302 "adrfam": "IPv4", 00:10:24.302 "traddr": "10.0.0.2", 00:10:24.302 "trsvcid": "4420" 00:10:24.302 }, 00:10:24.302 "peer_address": { 00:10:24.302 "trtype": "TCP", 00:10:24.302 "adrfam": "IPv4", 00:10:24.302 "traddr": "10.0.0.1", 00:10:24.302 "trsvcid": "56024" 00:10:24.302 }, 00:10:24.302 "auth": { 00:10:24.302 "state": "completed", 00:10:24.302 "digest": "sha256", 00:10:24.302 "dhgroup": "null" 00:10:24.302 } 00:10:24.302 } 00:10:24.302 ]' 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.302 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.560 21:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.166 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.425 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.685 00:10:25.685 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.685 21:23:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.685 21:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.945 { 00:10:25.945 "cntlid": 7, 00:10:25.945 "qid": 0, 00:10:25.945 "state": "enabled", 00:10:25.945 "thread": "nvmf_tgt_poll_group_000", 00:10:25.945 "listen_address": { 00:10:25.945 "trtype": "TCP", 00:10:25.945 "adrfam": "IPv4", 00:10:25.945 "traddr": "10.0.0.2", 00:10:25.945 "trsvcid": "4420" 00:10:25.945 }, 00:10:25.945 "peer_address": { 00:10:25.945 "trtype": "TCP", 00:10:25.945 "adrfam": "IPv4", 00:10:25.945 "traddr": "10.0.0.1", 00:10:25.945 "trsvcid": "56044" 00:10:25.945 }, 00:10:25.945 "auth": { 00:10:25.945 "state": "completed", 00:10:25.945 "digest": "sha256", 00:10:25.945 "dhgroup": "null" 00:10:25.945 } 00:10:25.945 } 00:10:25.945 ]' 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.945 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.205 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
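[Editor's note] keys[3] has no companion controller key (ckeys[3] is set to the empty string above), so the ${ckeys[$3]:+...} expansion drops the --dhchap-ctrlr-key / --dhchap-ctrl-secret arguments and the key3 rounds run host-only (unidirectional) authentication, as the add_host/attach/connect calls for key3 show. A minimal illustration of that expansion (SUBNQN/HOSTNQN as in the earlier sketch):

    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})     # empty array when ckeys[3] is unset or empty
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3 "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3 "${ckey[@]}"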
for dhgroup in "${dhgroups[@]}" 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:26.773 21:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.032 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.292 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.292 { 00:10:27.292 "cntlid": 9, 00:10:27.292 "qid": 0, 00:10:27.292 "state": "enabled", 00:10:27.292 "thread": "nvmf_tgt_poll_group_000", 00:10:27.292 "listen_address": { 00:10:27.292 "trtype": "TCP", 00:10:27.292 "adrfam": "IPv4", 00:10:27.292 
"traddr": "10.0.0.2", 00:10:27.292 "trsvcid": "4420" 00:10:27.292 }, 00:10:27.292 "peer_address": { 00:10:27.292 "trtype": "TCP", 00:10:27.292 "adrfam": "IPv4", 00:10:27.292 "traddr": "10.0.0.1", 00:10:27.292 "trsvcid": "56076" 00:10:27.292 }, 00:10:27.292 "auth": { 00:10:27.292 "state": "completed", 00:10:27.292 "digest": "sha256", 00:10:27.292 "dhgroup": "ffdhe2048" 00:10:27.292 } 00:10:27.292 } 00:10:27.292 ]' 00:10:27.292 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.552 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.810 21:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.378 21:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.636 00:10:28.894 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.894 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.895 { 00:10:28.895 "cntlid": 11, 00:10:28.895 "qid": 0, 00:10:28.895 "state": "enabled", 00:10:28.895 "thread": "nvmf_tgt_poll_group_000", 00:10:28.895 "listen_address": { 00:10:28.895 "trtype": "TCP", 00:10:28.895 "adrfam": "IPv4", 00:10:28.895 "traddr": "10.0.0.2", 00:10:28.895 "trsvcid": "4420" 00:10:28.895 }, 00:10:28.895 "peer_address": { 00:10:28.895 "trtype": "TCP", 00:10:28.895 "adrfam": "IPv4", 00:10:28.895 "traddr": "10.0.0.1", 00:10:28.895 "trsvcid": "56090" 00:10:28.895 }, 00:10:28.895 "auth": { 00:10:28.895 "state": "completed", 00:10:28.895 "digest": "sha256", 00:10:28.895 "dhgroup": "ffdhe2048" 00:10:28.895 } 00:10:28.895 } 00:10:28.895 ]' 00:10:28.895 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.154 21:24:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.154 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.412 21:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.981 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.239 00:10:30.239 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.239 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.239 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.497 { 00:10:30.497 "cntlid": 13, 00:10:30.497 "qid": 0, 00:10:30.497 "state": "enabled", 00:10:30.497 "thread": "nvmf_tgt_poll_group_000", 00:10:30.497 "listen_address": { 00:10:30.497 "trtype": "TCP", 00:10:30.497 "adrfam": "IPv4", 00:10:30.497 "traddr": "10.0.0.2", 00:10:30.497 "trsvcid": "4420" 00:10:30.497 }, 00:10:30.497 "peer_address": { 00:10:30.497 "trtype": "TCP", 00:10:30.497 "adrfam": "IPv4", 00:10:30.497 "traddr": "10.0.0.1", 00:10:30.497 "trsvcid": "56134" 00:10:30.497 }, 00:10:30.497 "auth": { 00:10:30.497 "state": "completed", 00:10:30.497 "digest": "sha256", 00:10:30.497 "dhgroup": "ffdhe2048" 00:10:30.497 } 00:10:30.497 } 00:10:30.497 ]' 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.497 21:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.755 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 
00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:31.321 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:31.580 21:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:31.839 00:10:31.839 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:31.839 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:31.839 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.098 { 00:10:32.098 "cntlid": 15, 00:10:32.098 "qid": 0, 
00:10:32.098 "state": "enabled", 00:10:32.098 "thread": "nvmf_tgt_poll_group_000", 00:10:32.098 "listen_address": { 00:10:32.098 "trtype": "TCP", 00:10:32.098 "adrfam": "IPv4", 00:10:32.098 "traddr": "10.0.0.2", 00:10:32.098 "trsvcid": "4420" 00:10:32.098 }, 00:10:32.098 "peer_address": { 00:10:32.098 "trtype": "TCP", 00:10:32.098 "adrfam": "IPv4", 00:10:32.098 "traddr": "10.0.0.1", 00:10:32.098 "trsvcid": "56150" 00:10:32.098 }, 00:10:32.098 "auth": { 00:10:32.098 "state": "completed", 00:10:32.098 "digest": "sha256", 00:10:32.098 "dhgroup": "ffdhe2048" 00:10:32.098 } 00:10:32.098 } 00:10:32.098 ]' 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.098 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.356 21:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:32.924 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.924 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:32.924 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:32.925 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.184 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.444 00:10:33.444 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.444 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.444 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.704 { 00:10:33.704 "cntlid": 17, 00:10:33.704 "qid": 0, 00:10:33.704 "state": "enabled", 00:10:33.704 "thread": "nvmf_tgt_poll_group_000", 00:10:33.704 "listen_address": { 00:10:33.704 "trtype": "TCP", 00:10:33.704 "adrfam": "IPv4", 00:10:33.704 "traddr": "10.0.0.2", 00:10:33.704 "trsvcid": "4420" 00:10:33.704 }, 00:10:33.704 "peer_address": { 00:10:33.704 "trtype": "TCP", 00:10:33.704 "adrfam": "IPv4", 00:10:33.704 "traddr": "10.0.0.1", 00:10:33.704 "trsvcid": "48208" 00:10:33.704 }, 00:10:33.704 "auth": { 00:10:33.704 "state": "completed", 00:10:33.704 "digest": "sha256", 00:10:33.704 "dhgroup": "ffdhe3072" 00:10:33.704 } 00:10:33.704 } 00:10:33.704 ]' 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:33.704 21:24:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.704 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.704 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.704 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.963 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:34.530 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.789 21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.789 
21:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.048 00:10:35.048 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.048 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.048 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.307 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.307 { 00:10:35.307 "cntlid": 19, 00:10:35.308 "qid": 0, 00:10:35.308 "state": "enabled", 00:10:35.308 "thread": "nvmf_tgt_poll_group_000", 00:10:35.308 "listen_address": { 00:10:35.308 "trtype": "TCP", 00:10:35.308 "adrfam": "IPv4", 00:10:35.308 "traddr": "10.0.0.2", 00:10:35.308 "trsvcid": "4420" 00:10:35.308 }, 00:10:35.308 "peer_address": { 00:10:35.308 "trtype": "TCP", 00:10:35.308 "adrfam": "IPv4", 00:10:35.308 "traddr": "10.0.0.1", 00:10:35.308 "trsvcid": "48224" 00:10:35.308 }, 00:10:35.308 "auth": { 00:10:35.308 "state": "completed", 00:10:35.308 "digest": "sha256", 00:10:35.308 "dhgroup": "ffdhe3072" 00:10:35.308 } 00:10:35.308 } 00:10:35.308 ]' 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.308 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.567 21:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
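Each round of the trace above exercises the same connect_authenticate flow from target/auth.sh: the host-side bdev_nvme module is pinned to one digest and one FFDHE group, the target is told which host NQN may log in with which DH-CHAP key, the controller is attached over TCP (which is where the DH-HMAC-CHAP exchange actually runs), and everything is torn down again before the next combination. The sketch below condenses one such round in the same shell idiom; it is only an illustration and assumes the subsystem, listener, and keyring entries key1/ckey1 were already created earlier in the run, with the NQNs and addresses taken from this trace.

  # Values from this run; placeholders for any other setup.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Host side (the SPDK app behind /var/tmp/host.sock): restrict DH-CHAP to sha256 + ffdhe3072.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: allow $hostnqn to authenticate with key1 and require ckey1 from the controller.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach over TCP; DH-HMAC-CHAP runs during this connect.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Tear down so the next digest/dhgroup/key combination starts from a clean state.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"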
00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:36.173 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:36.432 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.433 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.692 00:10:36.692 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.692 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.692 21:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.951 { 00:10:36.951 "cntlid": 21, 00:10:36.951 "qid": 0, 00:10:36.951 "state": "enabled", 00:10:36.951 "thread": "nvmf_tgt_poll_group_000", 00:10:36.951 "listen_address": { 00:10:36.951 "trtype": "TCP", 00:10:36.951 "adrfam": "IPv4", 00:10:36.951 "traddr": "10.0.0.2", 00:10:36.951 "trsvcid": "4420" 00:10:36.951 }, 00:10:36.951 "peer_address": { 00:10:36.951 "trtype": "TCP", 00:10:36.951 "adrfam": "IPv4", 00:10:36.951 "traddr": "10.0.0.1", 00:10:36.951 "trsvcid": "48254" 00:10:36.951 }, 00:10:36.951 "auth": { 00:10:36.951 "state": "completed", 00:10:36.951 "digest": "sha256", 00:10:36.951 "dhgroup": "ffdhe3072" 00:10:36.951 } 00:10:36.951 } 00:10:36.951 ]' 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.951 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.210 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:37.778 21:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:38.037 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:38.037 21:24:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.037 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.037 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:38.037 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:38.037 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.038 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.297 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.297 { 00:10:38.297 "cntlid": 23, 00:10:38.297 "qid": 0, 00:10:38.297 "state": "enabled", 00:10:38.297 "thread": "nvmf_tgt_poll_group_000", 00:10:38.297 "listen_address": { 00:10:38.297 "trtype": "TCP", 00:10:38.297 "adrfam": "IPv4", 00:10:38.297 "traddr": "10.0.0.2", 00:10:38.297 "trsvcid": "4420" 00:10:38.297 }, 00:10:38.297 "peer_address": { 00:10:38.297 "trtype": "TCP", 00:10:38.297 "adrfam": "IPv4", 00:10:38.297 "traddr": "10.0.0.1", 00:10:38.297 "trsvcid": "48280" 00:10:38.297 }, 00:10:38.297 "auth": { 00:10:38.297 "state": "completed", 00:10:38.297 "digest": "sha256", 00:10:38.297 "dhgroup": "ffdhe3072" 00:10:38.297 } 00:10:38.297 } 00:10:38.297 ]' 00:10:38.297 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.557 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.816 21:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.383 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.642 21:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.642 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.642 21:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.902 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.902 { 00:10:39.902 "cntlid": 25, 00:10:39.902 "qid": 0, 00:10:39.902 "state": "enabled", 00:10:39.902 "thread": "nvmf_tgt_poll_group_000", 00:10:39.902 "listen_address": { 00:10:39.902 "trtype": "TCP", 00:10:39.902 "adrfam": "IPv4", 00:10:39.902 "traddr": "10.0.0.2", 00:10:39.902 "trsvcid": "4420" 00:10:39.902 }, 00:10:39.902 "peer_address": { 00:10:39.902 "trtype": "TCP", 00:10:39.902 "adrfam": "IPv4", 00:10:39.902 "traddr": "10.0.0.1", 00:10:39.902 "trsvcid": "48310" 00:10:39.902 }, 00:10:39.902 "auth": { 00:10:39.902 "state": "completed", 00:10:39.902 "digest": "sha256", 00:10:39.902 "dhgroup": "ffdhe4096" 00:10:39.902 } 00:10:39.902 } 00:10:39.902 ]' 00:10:39.902 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.160 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.418 21:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret 
DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.985 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.244 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.244 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.244 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.527 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
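A successful attach alone does not prove the qpair was authenticated with the intended parameters, which is why the script follows each attach with the get_controllers/get_qpairs checks seen at target/auth.sh@44-48 in the trace. A condensed version of that verification is sketched below, reusing the variables from the previous sketch and assuming jq is available as it is in this run; the expected dhgroup is whatever was configured for the round (ffdhe4096 at this point in the trace).

  # Host side: the attached controller must show up under the requested name.
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # Target side: the subsystem's qpair must report completed DH-CHAP with the negotiated parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]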
00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.527 { 00:10:41.527 "cntlid": 27, 00:10:41.527 "qid": 0, 00:10:41.527 "state": "enabled", 00:10:41.527 "thread": "nvmf_tgt_poll_group_000", 00:10:41.527 "listen_address": { 00:10:41.527 "trtype": "TCP", 00:10:41.527 "adrfam": "IPv4", 00:10:41.527 "traddr": "10.0.0.2", 00:10:41.527 "trsvcid": "4420" 00:10:41.527 }, 00:10:41.527 "peer_address": { 00:10:41.527 "trtype": "TCP", 00:10:41.527 "adrfam": "IPv4", 00:10:41.527 "traddr": "10.0.0.1", 00:10:41.527 "trsvcid": "48326" 00:10:41.527 }, 00:10:41.527 "auth": { 00:10:41.527 "state": "completed", 00:10:41.527 "digest": "sha256", 00:10:41.527 "dhgroup": "ffdhe4096" 00:10:41.527 } 00:10:41.527 } 00:10:41.527 ]' 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.527 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.800 21:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.059 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.627 21:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.886 00:10:42.887 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.887 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.887 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.146 { 00:10:43.146 "cntlid": 29, 00:10:43.146 "qid": 0, 00:10:43.146 "state": "enabled", 00:10:43.146 "thread": "nvmf_tgt_poll_group_000", 00:10:43.146 "listen_address": { 00:10:43.146 "trtype": "TCP", 00:10:43.146 "adrfam": "IPv4", 00:10:43.146 "traddr": "10.0.0.2", 00:10:43.146 "trsvcid": "4420" 00:10:43.146 }, 00:10:43.146 "peer_address": { 00:10:43.146 "trtype": "TCP", 00:10:43.146 "adrfam": "IPv4", 00:10:43.146 "traddr": "10.0.0.1", 00:10:43.146 "trsvcid": "34576" 00:10:43.146 }, 00:10:43.146 "auth": { 00:10:43.146 "state": "completed", 00:10:43.146 "digest": "sha256", 00:10:43.146 "dhgroup": 
"ffdhe4096" 00:10:43.146 } 00:10:43.146 } 00:10:43.146 ]' 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.146 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:43.404 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.404 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.405 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.405 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.405 21:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:43.971 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.971 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:43.972 21:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.972 21:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.231 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:44.490 00:10:44.490 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.490 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.490 21:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.748 { 00:10:44.748 "cntlid": 31, 00:10:44.748 "qid": 0, 00:10:44.748 "state": "enabled", 00:10:44.748 "thread": "nvmf_tgt_poll_group_000", 00:10:44.748 "listen_address": { 00:10:44.748 "trtype": "TCP", 00:10:44.748 "adrfam": "IPv4", 00:10:44.748 "traddr": "10.0.0.2", 00:10:44.748 "trsvcid": "4420" 00:10:44.748 }, 00:10:44.748 "peer_address": { 00:10:44.748 "trtype": "TCP", 00:10:44.748 "adrfam": "IPv4", 00:10:44.748 "traddr": "10.0.0.1", 00:10:44.748 "trsvcid": "34604" 00:10:44.748 }, 00:10:44.748 "auth": { 00:10:44.748 "state": "completed", 00:10:44.748 "digest": "sha256", 00:10:44.748 "dhgroup": "ffdhe4096" 00:10:44.748 } 00:10:44.748 } 00:10:44.748 ]' 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.748 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.007 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:45.007 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.007 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.007 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.007 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.265 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid 
b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.831 21:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.831 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.396 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.396 { 00:10:46.396 "cntlid": 33, 00:10:46.396 "qid": 0, 00:10:46.396 "state": "enabled", 00:10:46.396 "thread": "nvmf_tgt_poll_group_000", 00:10:46.396 "listen_address": { 00:10:46.396 "trtype": "TCP", 00:10:46.396 "adrfam": "IPv4", 00:10:46.396 "traddr": "10.0.0.2", 00:10:46.396 "trsvcid": "4420" 00:10:46.396 }, 00:10:46.396 "peer_address": { 00:10:46.396 "trtype": "TCP", 00:10:46.396 "adrfam": "IPv4", 00:10:46.396 "traddr": "10.0.0.1", 00:10:46.396 "trsvcid": "34640" 00:10:46.396 }, 00:10:46.396 "auth": { 00:10:46.396 "state": "completed", 00:10:46.396 "digest": "sha256", 00:10:46.396 "dhgroup": "ffdhe6144" 00:10:46.396 } 00:10:46.396 } 00:10:46.396 ]' 00:10:46.396 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.654 21:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.912 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.477 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.736 21:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.024 00:10:48.024 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.024 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.024 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.322 { 00:10:48.322 "cntlid": 35, 00:10:48.322 "qid": 0, 00:10:48.322 "state": "enabled", 00:10:48.322 "thread": "nvmf_tgt_poll_group_000", 00:10:48.322 "listen_address": { 00:10:48.322 "trtype": "TCP", 00:10:48.322 "adrfam": "IPv4", 00:10:48.322 "traddr": "10.0.0.2", 00:10:48.322 "trsvcid": "4420" 00:10:48.322 }, 00:10:48.322 "peer_address": { 00:10:48.322 "trtype": 
"TCP", 00:10:48.322 "adrfam": "IPv4", 00:10:48.322 "traddr": "10.0.0.1", 00:10:48.322 "trsvcid": "34670" 00:10:48.322 }, 00:10:48.322 "auth": { 00:10:48.322 "state": "completed", 00:10:48.322 "digest": "sha256", 00:10:48.322 "dhgroup": "ffdhe6144" 00:10:48.322 } 00:10:48.322 } 00:10:48.322 ]' 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.322 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.580 21:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:49.150 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.410 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.670 00:10:49.671 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.671 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.671 21:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.930 { 00:10:49.930 "cntlid": 37, 00:10:49.930 "qid": 0, 00:10:49.930 "state": "enabled", 00:10:49.930 "thread": "nvmf_tgt_poll_group_000", 00:10:49.930 "listen_address": { 00:10:49.930 "trtype": "TCP", 00:10:49.930 "adrfam": "IPv4", 00:10:49.930 "traddr": "10.0.0.2", 00:10:49.930 "trsvcid": "4420" 00:10:49.930 }, 00:10:49.930 "peer_address": { 00:10:49.930 "trtype": "TCP", 00:10:49.930 "adrfam": "IPv4", 00:10:49.930 "traddr": "10.0.0.1", 00:10:49.930 "trsvcid": "34714" 00:10:49.930 }, 00:10:49.930 "auth": { 00:10:49.930 "state": "completed", 00:10:49.930 "digest": "sha256", 00:10:49.930 "dhgroup": "ffdhe6144" 00:10:49.930 } 00:10:49.930 } 00:10:49.930 ]' 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.930 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.188 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:50.756 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.756 21:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:50.756 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.015 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.274 00:10:51.274 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
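Each connect_authenticate pass traced above reduces to the same host/target RPC sequence. A condensed sketch of the sha256/ffdhe6144 pass with key1, using only paths, NQNs and flags that appear verbatim in this log (target-side calls go through rpc_cmd against the target's RPC socket; the generated DHHC-1 key values themselves are omitted):

# host side: enable DH-HMAC-CHAP with the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
# target side: admit the host NQN with key1/ckey1
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach a controller with the same key pair, then check the qpair authenticated
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
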
00:10:51.274 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.274 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.533 { 00:10:51.533 "cntlid": 39, 00:10:51.533 "qid": 0, 00:10:51.533 "state": "enabled", 00:10:51.533 "thread": "nvmf_tgt_poll_group_000", 00:10:51.533 "listen_address": { 00:10:51.533 "trtype": "TCP", 00:10:51.533 "adrfam": "IPv4", 00:10:51.533 "traddr": "10.0.0.2", 00:10:51.533 "trsvcid": "4420" 00:10:51.533 }, 00:10:51.533 "peer_address": { 00:10:51.533 "trtype": "TCP", 00:10:51.533 "adrfam": "IPv4", 00:10:51.533 "traddr": "10.0.0.1", 00:10:51.533 "trsvcid": "34738" 00:10:51.533 }, 00:10:51.533 "auth": { 00:10:51.533 "state": "completed", 00:10:51.533 "digest": "sha256", 00:10:51.533 "dhgroup": "ffdhe6144" 00:10:51.533 } 00:10:51.533 } 00:10:51.533 ]' 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:51.533 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.792 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.792 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.792 21:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.792 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.360 21:24:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.360 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.619 21:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.188 00:10:53.188 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.188 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.188 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.447 { 00:10:53.447 "cntlid": 41, 00:10:53.447 "qid": 0, 00:10:53.447 "state": "enabled", 00:10:53.447 "thread": "nvmf_tgt_poll_group_000", 00:10:53.447 "listen_address": { 00:10:53.447 "trtype": 
"TCP", 00:10:53.447 "adrfam": "IPv4", 00:10:53.447 "traddr": "10.0.0.2", 00:10:53.447 "trsvcid": "4420" 00:10:53.447 }, 00:10:53.447 "peer_address": { 00:10:53.447 "trtype": "TCP", 00:10:53.447 "adrfam": "IPv4", 00:10:53.447 "traddr": "10.0.0.1", 00:10:53.447 "trsvcid": "46074" 00:10:53.447 }, 00:10:53.447 "auth": { 00:10:53.447 "state": "completed", 00:10:53.447 "digest": "sha256", 00:10:53.447 "dhgroup": "ffdhe8192" 00:10:53.447 } 00:10:53.447 } 00:10:53.447 ]' 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.447 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.707 21:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.276 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.536 21:24:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.536 21:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.104 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.104 { 00:10:55.104 "cntlid": 43, 00:10:55.104 "qid": 0, 00:10:55.104 "state": "enabled", 00:10:55.104 "thread": "nvmf_tgt_poll_group_000", 00:10:55.104 "listen_address": { 00:10:55.104 "trtype": "TCP", 00:10:55.104 "adrfam": "IPv4", 00:10:55.104 "traddr": "10.0.0.2", 00:10:55.104 "trsvcid": "4420" 00:10:55.104 }, 00:10:55.104 "peer_address": { 00:10:55.104 "trtype": "TCP", 00:10:55.104 "adrfam": "IPv4", 00:10:55.104 "traddr": "10.0.0.1", 00:10:55.104 "trsvcid": "46102" 00:10:55.104 }, 00:10:55.104 "auth": { 00:10:55.104 "state": "completed", 00:10:55.104 "digest": "sha256", 00:10:55.104 "dhgroup": "ffdhe8192" 00:10:55.104 } 00:10:55.104 } 00:10:55.104 ]' 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.104 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.363 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.363 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.363 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:55.363 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.363 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.623 21:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.191 21:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.192 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.192 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.759 00:10:56.759 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.759 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.759 21:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.018 { 00:10:57.018 "cntlid": 45, 00:10:57.018 "qid": 0, 00:10:57.018 "state": "enabled", 00:10:57.018 "thread": "nvmf_tgt_poll_group_000", 00:10:57.018 "listen_address": { 00:10:57.018 "trtype": "TCP", 00:10:57.018 "adrfam": "IPv4", 00:10:57.018 "traddr": "10.0.0.2", 00:10:57.018 "trsvcid": "4420" 00:10:57.018 }, 00:10:57.018 "peer_address": { 00:10:57.018 "trtype": "TCP", 00:10:57.018 "adrfam": "IPv4", 00:10:57.018 "traddr": "10.0.0.1", 00:10:57.018 "trsvcid": "46132" 00:10:57.018 }, 00:10:57.018 "auth": { 00:10:57.018 "state": "completed", 00:10:57.018 "digest": "sha256", 00:10:57.018 "dhgroup": "ffdhe8192" 00:10:57.018 } 00:10:57.018 } 00:10:57.018 ]' 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.018 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.277 21:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.865 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.124 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.694 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:10:58.694 { 00:10:58.694 "cntlid": 47, 00:10:58.694 "qid": 0, 00:10:58.694 "state": "enabled", 00:10:58.694 "thread": "nvmf_tgt_poll_group_000", 00:10:58.694 "listen_address": { 00:10:58.694 "trtype": "TCP", 00:10:58.694 "adrfam": "IPv4", 00:10:58.694 "traddr": "10.0.0.2", 00:10:58.694 "trsvcid": "4420" 00:10:58.694 }, 00:10:58.694 "peer_address": { 00:10:58.694 "trtype": "TCP", 00:10:58.694 "adrfam": "IPv4", 00:10:58.694 "traddr": "10.0.0.1", 00:10:58.694 "trsvcid": "46166" 00:10:58.694 }, 00:10:58.694 "auth": { 00:10:58.694 "state": "completed", 00:10:58.694 "digest": "sha256", 00:10:58.694 "dhgroup": "ffdhe8192" 00:10:58.694 } 00:10:58.694 } 00:10:58.694 ]' 00:10:58.694 21:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.694 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.694 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.694 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.694 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.956 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.956 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.956 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.956 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.524 21:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
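After the SPDK-initiator check, each pass also authenticates with the kernel initiator through nvme-cli and then tears down, as the connect/disconnect lines above show. A sketch with the secrets replaced by placeholders (the real DHHC-1:xx:...: strings are the keys generated for this test run):

# kernel initiator: authenticate against the same subsystem with the matching secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 \
    --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 \
    --dhchap-secret 'DHHC-1:..:<host key for this keyid>:' \
    --dhchap-ctrl-secret 'DHHC-1:..:<controller key for this keyid>:'
# teardown before the next digest/dhgroup/keyid combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
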
00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.783 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.042 00:11:00.042 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.042 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.042 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.300 { 00:11:00.300 "cntlid": 49, 00:11:00.300 "qid": 0, 00:11:00.300 "state": "enabled", 00:11:00.300 "thread": "nvmf_tgt_poll_group_000", 00:11:00.300 "listen_address": { 00:11:00.300 "trtype": "TCP", 00:11:00.300 "adrfam": "IPv4", 00:11:00.300 "traddr": "10.0.0.2", 00:11:00.300 "trsvcid": "4420" 00:11:00.300 }, 00:11:00.300 "peer_address": { 00:11:00.300 "trtype": "TCP", 00:11:00.300 "adrfam": "IPv4", 00:11:00.300 "traddr": "10.0.0.1", 00:11:00.300 "trsvcid": "46200" 00:11:00.300 }, 00:11:00.300 "auth": { 00:11:00.300 "state": "completed", 00:11:00.300 "digest": "sha384", 00:11:00.300 "dhgroup": "null" 00:11:00.300 } 00:11:00.300 } 00:11:00.300 ]' 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.300 21:24:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.300 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.559 21:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.126 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.385 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.644 00:11:01.644 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.644 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.644 21:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.903 { 00:11:01.903 "cntlid": 51, 00:11:01.903 "qid": 0, 00:11:01.903 "state": "enabled", 00:11:01.903 "thread": "nvmf_tgt_poll_group_000", 00:11:01.903 "listen_address": { 00:11:01.903 "trtype": "TCP", 00:11:01.903 "adrfam": "IPv4", 00:11:01.903 "traddr": "10.0.0.2", 00:11:01.903 "trsvcid": "4420" 00:11:01.903 }, 00:11:01.903 "peer_address": { 00:11:01.903 "trtype": "TCP", 00:11:01.903 "adrfam": "IPv4", 00:11:01.903 "traddr": "10.0.0.1", 00:11:01.903 "trsvcid": "46218" 00:11:01.903 }, 00:11:01.903 "auth": { 00:11:01.903 "state": "completed", 00:11:01.903 "digest": "sha384", 00:11:01.903 "dhgroup": "null" 00:11:01.903 } 00:11:01.903 } 00:11:01.903 ]' 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.903 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.162 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.730 21:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.990 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.250 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.250 { 00:11:03.250 "cntlid": 53, 00:11:03.250 "qid": 0, 00:11:03.250 "state": "enabled", 00:11:03.250 "thread": "nvmf_tgt_poll_group_000", 00:11:03.250 "listen_address": { 00:11:03.250 "trtype": "TCP", 00:11:03.250 "adrfam": "IPv4", 00:11:03.250 "traddr": "10.0.0.2", 00:11:03.250 "trsvcid": "4420" 00:11:03.250 }, 00:11:03.250 "peer_address": { 00:11:03.250 "trtype": "TCP", 00:11:03.250 "adrfam": "IPv4", 00:11:03.250 "traddr": "10.0.0.1", 00:11:03.250 "trsvcid": "36428" 00:11:03.250 }, 00:11:03.250 "auth": { 00:11:03.250 "state": "completed", 00:11:03.250 "digest": "sha384", 00:11:03.250 "dhgroup": "null" 00:11:03.250 } 00:11:03.250 } 00:11:03.250 ]' 00:11:03.250 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.509 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.769 21:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.338 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.597 21:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.597 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.597 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.597 00:11:04.856 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.856 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.856 21:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.856 { 00:11:04.856 "cntlid": 55, 00:11:04.856 "qid": 0, 00:11:04.856 "state": "enabled", 00:11:04.856 "thread": "nvmf_tgt_poll_group_000", 00:11:04.856 "listen_address": { 00:11:04.856 "trtype": "TCP", 00:11:04.856 "adrfam": "IPv4", 00:11:04.856 "traddr": "10.0.0.2", 00:11:04.856 "trsvcid": "4420" 00:11:04.856 }, 00:11:04.856 "peer_address": { 00:11:04.856 "trtype": "TCP", 00:11:04.856 "adrfam": "IPv4", 00:11:04.856 "traddr": "10.0.0.1", 00:11:04.856 "trsvcid": "36452" 00:11:04.856 }, 00:11:04.856 "auth": { 00:11:04.856 "state": "completed", 00:11:04.856 "digest": "sha384", 00:11:04.856 "dhgroup": "null" 00:11:04.856 } 00:11:04.856 } 00:11:04.856 ]' 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.856 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.856 21:24:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.114 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.114 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.114 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.114 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.114 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.372 21:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.938 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.939 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.197 00:11:06.197 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.197 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.197 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.456 { 00:11:06.456 "cntlid": 57, 00:11:06.456 "qid": 0, 00:11:06.456 "state": "enabled", 00:11:06.456 "thread": "nvmf_tgt_poll_group_000", 00:11:06.456 "listen_address": { 00:11:06.456 "trtype": "TCP", 00:11:06.456 "adrfam": "IPv4", 00:11:06.456 "traddr": "10.0.0.2", 00:11:06.456 "trsvcid": "4420" 00:11:06.456 }, 00:11:06.456 "peer_address": { 00:11:06.456 "trtype": "TCP", 00:11:06.456 "adrfam": "IPv4", 00:11:06.456 "traddr": "10.0.0.1", 00:11:06.456 "trsvcid": "36476" 00:11:06.456 }, 00:11:06.456 "auth": { 00:11:06.456 "state": "completed", 00:11:06.456 "digest": "sha384", 00:11:06.456 "dhgroup": "ffdhe2048" 00:11:06.456 } 00:11:06.456 } 00:11:06.456 ]' 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.456 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.715 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:06.715 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.715 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.715 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.715 21:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.715 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret 
DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.281 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.539 21:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.797 00:11:07.797 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.797 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.797 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.055 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:11:08.055 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.055 21:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.055 21:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.056 { 00:11:08.056 "cntlid": 59, 00:11:08.056 "qid": 0, 00:11:08.056 "state": "enabled", 00:11:08.056 "thread": "nvmf_tgt_poll_group_000", 00:11:08.056 "listen_address": { 00:11:08.056 "trtype": "TCP", 00:11:08.056 "adrfam": "IPv4", 00:11:08.056 "traddr": "10.0.0.2", 00:11:08.056 "trsvcid": "4420" 00:11:08.056 }, 00:11:08.056 "peer_address": { 00:11:08.056 "trtype": "TCP", 00:11:08.056 "adrfam": "IPv4", 00:11:08.056 "traddr": "10.0.0.1", 00:11:08.056 "trsvcid": "36504" 00:11:08.056 }, 00:11:08.056 "auth": { 00:11:08.056 "state": "completed", 00:11:08.056 "digest": "sha384", 00:11:08.056 "dhgroup": "ffdhe2048" 00:11:08.056 } 00:11:08.056 } 00:11:08.056 ]' 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:08.056 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.315 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.315 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.315 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.315 21:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:08.885 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.145 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.405 00:11:09.405 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.405 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.405 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.664 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.664 { 00:11:09.664 "cntlid": 61, 00:11:09.664 "qid": 0, 00:11:09.664 "state": "enabled", 00:11:09.664 "thread": "nvmf_tgt_poll_group_000", 00:11:09.664 "listen_address": { 00:11:09.665 "trtype": "TCP", 00:11:09.665 "adrfam": "IPv4", 00:11:09.665 "traddr": "10.0.0.2", 00:11:09.665 "trsvcid": "4420" 00:11:09.665 }, 00:11:09.665 "peer_address": { 00:11:09.665 "trtype": "TCP", 00:11:09.665 "adrfam": "IPv4", 00:11:09.665 "traddr": "10.0.0.1", 00:11:09.665 "trsvcid": "36528" 00:11:09.665 }, 00:11:09.665 "auth": { 00:11:09.665 "state": "completed", 00:11:09.665 "digest": "sha384", 00:11:09.665 "dhgroup": 
"ffdhe2048" 00:11:09.665 } 00:11:09.665 } 00:11:09.665 ]' 00:11:09.665 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.665 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.665 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.665 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.665 21:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.924 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.924 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.924 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.924 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.492 21:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.751 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.752 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:11.010 00:11:11.010 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.010 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.010 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.269 { 00:11:11.269 "cntlid": 63, 00:11:11.269 "qid": 0, 00:11:11.269 "state": "enabled", 00:11:11.269 "thread": "nvmf_tgt_poll_group_000", 00:11:11.269 "listen_address": { 00:11:11.269 "trtype": "TCP", 00:11:11.269 "adrfam": "IPv4", 00:11:11.269 "traddr": "10.0.0.2", 00:11:11.269 "trsvcid": "4420" 00:11:11.269 }, 00:11:11.269 "peer_address": { 00:11:11.269 "trtype": "TCP", 00:11:11.269 "adrfam": "IPv4", 00:11:11.269 "traddr": "10.0.0.1", 00:11:11.269 "trsvcid": "36548" 00:11:11.269 }, 00:11:11.269 "auth": { 00:11:11.269 "state": "completed", 00:11:11.269 "digest": "sha384", 00:11:11.269 "dhgroup": "ffdhe2048" 00:11:11.269 } 00:11:11.269 } 00:11:11.269 ]' 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.269 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.528 21:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid 
b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:12.097 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:12.356 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.357 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.615 00:11:12.615 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.615 21:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.615 21:24:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.875 { 00:11:12.875 "cntlid": 65, 00:11:12.875 "qid": 0, 00:11:12.875 "state": "enabled", 00:11:12.875 "thread": "nvmf_tgt_poll_group_000", 00:11:12.875 "listen_address": { 00:11:12.875 "trtype": "TCP", 00:11:12.875 "adrfam": "IPv4", 00:11:12.875 "traddr": "10.0.0.2", 00:11:12.875 "trsvcid": "4420" 00:11:12.875 }, 00:11:12.875 "peer_address": { 00:11:12.875 "trtype": "TCP", 00:11:12.875 "adrfam": "IPv4", 00:11:12.875 "traddr": "10.0.0.1", 00:11:12.875 "trsvcid": "59060" 00:11:12.875 }, 00:11:12.875 "auth": { 00:11:12.875 "state": "completed", 00:11:12.875 "digest": "sha384", 00:11:12.875 "dhgroup": "ffdhe3072" 00:11:12.875 } 00:11:12.875 } 00:11:12.875 ]' 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.875 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.134 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.701 
21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.701 21:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.959 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.217 00:11:14.217 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.217 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.218 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.477 { 00:11:14.477 "cntlid": 67, 00:11:14.477 "qid": 0, 00:11:14.477 "state": "enabled", 00:11:14.477 "thread": "nvmf_tgt_poll_group_000", 00:11:14.477 "listen_address": { 00:11:14.477 "trtype": "TCP", 00:11:14.477 "adrfam": "IPv4", 00:11:14.477 "traddr": "10.0.0.2", 00:11:14.477 "trsvcid": "4420" 00:11:14.477 }, 00:11:14.477 "peer_address": { 00:11:14.477 "trtype": "TCP", 00:11:14.477 
"adrfam": "IPv4", 00:11:14.477 "traddr": "10.0.0.1", 00:11:14.477 "trsvcid": "59078" 00:11:14.477 }, 00:11:14.477 "auth": { 00:11:14.477 "state": "completed", 00:11:14.477 "digest": "sha384", 00:11:14.477 "dhgroup": "ffdhe3072" 00:11:14.477 } 00:11:14.477 } 00:11:14.477 ]' 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.477 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.736 21:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.306 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.565 21:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.825 00:11:15.825 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.825 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.825 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.085 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.086 { 00:11:16.086 "cntlid": 69, 00:11:16.086 "qid": 0, 00:11:16.086 "state": "enabled", 00:11:16.086 "thread": "nvmf_tgt_poll_group_000", 00:11:16.086 "listen_address": { 00:11:16.086 "trtype": "TCP", 00:11:16.086 "adrfam": "IPv4", 00:11:16.086 "traddr": "10.0.0.2", 00:11:16.086 "trsvcid": "4420" 00:11:16.086 }, 00:11:16.086 "peer_address": { 00:11:16.086 "trtype": "TCP", 00:11:16.086 "adrfam": "IPv4", 00:11:16.086 "traddr": "10.0.0.1", 00:11:16.086 "trsvcid": "59112" 00:11:16.086 }, 00:11:16.086 "auth": { 00:11:16.086 "state": "completed", 00:11:16.086 "digest": "sha384", 00:11:16.086 "dhgroup": "ffdhe3072" 00:11:16.086 } 00:11:16.086 } 00:11:16.086 ]' 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.086 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.345 21:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.913 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.171 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.429 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.429 { 00:11:17.429 "cntlid": 71, 00:11:17.429 "qid": 0, 00:11:17.429 "state": "enabled", 00:11:17.429 "thread": "nvmf_tgt_poll_group_000", 00:11:17.429 "listen_address": { 00:11:17.429 "trtype": "TCP", 00:11:17.429 "adrfam": "IPv4", 00:11:17.429 "traddr": "10.0.0.2", 00:11:17.429 "trsvcid": "4420" 00:11:17.429 }, 00:11:17.429 "peer_address": { 00:11:17.429 "trtype": "TCP", 00:11:17.429 "adrfam": "IPv4", 00:11:17.429 "traddr": "10.0.0.1", 00:11:17.429 "trsvcid": "59138" 00:11:17.429 }, 00:11:17.429 "auth": { 00:11:17.429 "state": "completed", 00:11:17.429 "digest": "sha384", 00:11:17.429 "dhgroup": "ffdhe3072" 00:11:17.429 } 00:11:17.429 } 00:11:17.429 ]' 00:11:17.429 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.687 21:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.945 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:18.512 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.512 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:18.512 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.512 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.512 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.513 21:24:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.513 21:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.771 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.030 { 00:11:19.030 "cntlid": 73, 00:11:19.030 "qid": 0, 00:11:19.030 "state": "enabled", 00:11:19.030 "thread": "nvmf_tgt_poll_group_000", 00:11:19.030 "listen_address": { 00:11:19.030 "trtype": 
"TCP", 00:11:19.030 "adrfam": "IPv4", 00:11:19.030 "traddr": "10.0.0.2", 00:11:19.030 "trsvcid": "4420" 00:11:19.030 }, 00:11:19.030 "peer_address": { 00:11:19.030 "trtype": "TCP", 00:11:19.030 "adrfam": "IPv4", 00:11:19.030 "traddr": "10.0.0.1", 00:11:19.030 "trsvcid": "59152" 00:11:19.030 }, 00:11:19.030 "auth": { 00:11:19.030 "state": "completed", 00:11:19.030 "digest": "sha384", 00:11:19.030 "dhgroup": "ffdhe4096" 00:11:19.030 } 00:11:19.030 } 00:11:19.030 ]' 00:11:19.030 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.289 21:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:19.856 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:20.115 21:24:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.115 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.373 00:11:20.373 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.373 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.373 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.632 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.632 { 00:11:20.632 "cntlid": 75, 00:11:20.632 "qid": 0, 00:11:20.632 "state": "enabled", 00:11:20.632 "thread": "nvmf_tgt_poll_group_000", 00:11:20.633 "listen_address": { 00:11:20.633 "trtype": "TCP", 00:11:20.633 "adrfam": "IPv4", 00:11:20.633 "traddr": "10.0.0.2", 00:11:20.633 "trsvcid": "4420" 00:11:20.633 }, 00:11:20.633 "peer_address": { 00:11:20.633 "trtype": "TCP", 00:11:20.633 "adrfam": "IPv4", 00:11:20.633 "traddr": "10.0.0.1", 00:11:20.633 "trsvcid": "59170" 00:11:20.633 }, 00:11:20.633 "auth": { 00:11:20.633 "state": "completed", 00:11:20.633 "digest": "sha384", 00:11:20.633 "dhgroup": "ffdhe4096" 00:11:20.633 } 00:11:20.633 } 00:11:20.633 ]' 00:11:20.633 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.633 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.633 21:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.891 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.459 21:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.718 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.977 00:11:21.977 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.977 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.977 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.236 { 00:11:22.236 "cntlid": 77, 00:11:22.236 "qid": 0, 00:11:22.236 "state": "enabled", 00:11:22.236 "thread": "nvmf_tgt_poll_group_000", 00:11:22.236 "listen_address": { 00:11:22.236 "trtype": "TCP", 00:11:22.236 "adrfam": "IPv4", 00:11:22.236 "traddr": "10.0.0.2", 00:11:22.236 "trsvcid": "4420" 00:11:22.236 }, 00:11:22.236 "peer_address": { 00:11:22.236 "trtype": "TCP", 00:11:22.236 "adrfam": "IPv4", 00:11:22.236 "traddr": "10.0.0.1", 00:11:22.236 "trsvcid": "59192" 00:11:22.236 }, 00:11:22.236 "auth": { 00:11:22.236 "state": "completed", 00:11:22.236 "digest": "sha384", 00:11:22.236 "dhgroup": "ffdhe4096" 00:11:22.236 } 00:11:22.236 } 00:11:22.236 ]' 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.236 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.496 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.496 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.496 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.496 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.496 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.762 21:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.331 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:23.591 00:11:23.591 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.591 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.591 21:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:23.850 { 00:11:23.850 "cntlid": 79, 00:11:23.850 "qid": 0, 00:11:23.850 "state": "enabled", 00:11:23.850 "thread": "nvmf_tgt_poll_group_000", 00:11:23.850 "listen_address": { 00:11:23.850 "trtype": "TCP", 00:11:23.850 "adrfam": "IPv4", 00:11:23.850 "traddr": "10.0.0.2", 00:11:23.850 "trsvcid": "4420" 00:11:23.850 }, 00:11:23.850 "peer_address": { 00:11:23.850 "trtype": "TCP", 00:11:23.850 "adrfam": "IPv4", 00:11:23.850 "traddr": "10.0.0.1", 00:11:23.850 "trsvcid": "35410" 00:11:23.850 }, 00:11:23.850 "auth": { 00:11:23.850 "state": "completed", 00:11:23.850 "digest": "sha384", 00:11:23.850 "dhgroup": "ffdhe4096" 00:11:23.850 } 00:11:23.850 } 00:11:23.850 ]' 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.850 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.109 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:24.109 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.109 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.109 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.109 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.368 21:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.937 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.504 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.504 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.504 { 00:11:25.504 "cntlid": 81, 00:11:25.504 "qid": 0, 00:11:25.504 "state": "enabled", 00:11:25.504 "thread": "nvmf_tgt_poll_group_000", 00:11:25.504 "listen_address": { 00:11:25.504 "trtype": "TCP", 00:11:25.504 "adrfam": "IPv4", 00:11:25.504 "traddr": "10.0.0.2", 00:11:25.505 "trsvcid": "4420" 00:11:25.505 }, 00:11:25.505 "peer_address": { 00:11:25.505 "trtype": "TCP", 00:11:25.505 "adrfam": "IPv4", 00:11:25.505 "traddr": "10.0.0.1", 00:11:25.505 "trsvcid": "35434" 00:11:25.505 }, 00:11:25.505 "auth": { 00:11:25.505 "state": "completed", 00:11:25.505 "digest": "sha384", 00:11:25.505 "dhgroup": "ffdhe6144" 00:11:25.505 } 00:11:25.505 } 00:11:25.505 ]' 00:11:25.505 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.763 21:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.021 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.589 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.848 21:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.848 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.848 21:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.107 00:11:27.107 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.107 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.107 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.365 { 00:11:27.365 "cntlid": 83, 00:11:27.365 "qid": 0, 00:11:27.365 "state": "enabled", 00:11:27.365 "thread": "nvmf_tgt_poll_group_000", 00:11:27.365 "listen_address": { 00:11:27.365 "trtype": "TCP", 00:11:27.365 "adrfam": "IPv4", 00:11:27.365 "traddr": "10.0.0.2", 00:11:27.365 "trsvcid": "4420" 00:11:27.365 }, 00:11:27.365 "peer_address": { 00:11:27.365 "trtype": "TCP", 00:11:27.365 "adrfam": "IPv4", 00:11:27.365 "traddr": "10.0.0.1", 00:11:27.365 "trsvcid": "35452" 00:11:27.365 }, 00:11:27.365 "auth": { 00:11:27.365 "state": "completed", 00:11:27.365 "digest": "sha384", 00:11:27.365 "dhgroup": "ffdhe6144" 00:11:27.365 } 00:11:27.365 } 00:11:27.365 ]' 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.365 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.624 21:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:28.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.191 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.449 21:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.707 00:11:28.707 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.707 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.707 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.966 { 00:11:28.966 "cntlid": 85, 00:11:28.966 "qid": 0, 00:11:28.966 "state": "enabled", 00:11:28.966 "thread": "nvmf_tgt_poll_group_000", 00:11:28.966 "listen_address": { 00:11:28.966 "trtype": "TCP", 00:11:28.966 "adrfam": "IPv4", 00:11:28.966 "traddr": "10.0.0.2", 00:11:28.966 "trsvcid": "4420" 00:11:28.966 }, 00:11:28.966 "peer_address": { 00:11:28.966 "trtype": "TCP", 00:11:28.966 "adrfam": "IPv4", 00:11:28.966 "traddr": "10.0.0.1", 00:11:28.966 "trsvcid": "35484" 00:11:28.966 }, 00:11:28.966 "auth": { 00:11:28.966 "state": "completed", 00:11:28.966 "digest": "sha384", 00:11:28.966 "dhgroup": "ffdhe6144" 00:11:28.966 } 00:11:28.966 } 00:11:28.966 ]' 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.966 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.967 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.226 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.226 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.226 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.226 21:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:29.795 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:30.093 21:25:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.093 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.369 00:11:30.369 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.369 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.369 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.628 { 00:11:30.628 "cntlid": 87, 00:11:30.628 "qid": 0, 00:11:30.628 "state": "enabled", 00:11:30.628 "thread": "nvmf_tgt_poll_group_000", 00:11:30.628 "listen_address": { 00:11:30.628 "trtype": "TCP", 00:11:30.628 "adrfam": "IPv4", 00:11:30.628 "traddr": "10.0.0.2", 00:11:30.628 "trsvcid": "4420" 00:11:30.628 }, 00:11:30.628 "peer_address": { 00:11:30.628 "trtype": "TCP", 00:11:30.628 "adrfam": "IPv4", 00:11:30.628 "traddr": "10.0.0.1", 00:11:30.628 "trsvcid": "35508" 00:11:30.628 }, 00:11:30.628 "auth": { 00:11:30.628 "state": "completed", 00:11:30.628 "digest": "sha384", 00:11:30.628 "dhgroup": "ffdhe6144" 00:11:30.628 } 00:11:30.628 } 00:11:30.628 ]' 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:30.628 21:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.887 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.888 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.888 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.888 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:31.471 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.731 21:25:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.731 21:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.300 00:11:32.300 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.300 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.300 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.558 { 00:11:32.558 "cntlid": 89, 00:11:32.558 "qid": 0, 00:11:32.558 "state": "enabled", 00:11:32.558 "thread": "nvmf_tgt_poll_group_000", 00:11:32.558 "listen_address": { 00:11:32.558 "trtype": "TCP", 00:11:32.558 "adrfam": "IPv4", 00:11:32.558 "traddr": "10.0.0.2", 00:11:32.558 "trsvcid": "4420" 00:11:32.558 }, 00:11:32.558 "peer_address": { 00:11:32.558 "trtype": "TCP", 00:11:32.558 "adrfam": "IPv4", 00:11:32.558 "traddr": "10.0.0.1", 00:11:32.558 "trsvcid": "35542" 00:11:32.558 }, 00:11:32.558 "auth": { 00:11:32.558 "state": "completed", 00:11:32.558 "digest": "sha384", 00:11:32.558 "dhgroup": "ffdhe8192" 00:11:32.558 } 00:11:32.558 } 00:11:32.558 ]' 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.558 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.559 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.559 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.817 21:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret 
DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.386 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.646 21:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.214 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.214 { 00:11:34.214 "cntlid": 91, 00:11:34.214 "qid": 0, 00:11:34.214 "state": "enabled", 00:11:34.214 "thread": "nvmf_tgt_poll_group_000", 00:11:34.214 "listen_address": { 00:11:34.214 "trtype": "TCP", 00:11:34.214 "adrfam": "IPv4", 00:11:34.214 "traddr": "10.0.0.2", 00:11:34.214 "trsvcid": "4420" 00:11:34.214 }, 00:11:34.214 "peer_address": { 00:11:34.214 "trtype": "TCP", 00:11:34.214 "adrfam": "IPv4", 00:11:34.214 "traddr": "10.0.0.1", 00:11:34.214 "trsvcid": "39588" 00:11:34.214 }, 00:11:34.214 "auth": { 00:11:34.214 "state": "completed", 00:11:34.214 "digest": "sha384", 00:11:34.214 "dhgroup": "ffdhe8192" 00:11:34.214 } 00:11:34.214 } 00:11:34.214 ]' 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.214 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.473 21:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:11:35.042 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:35.301 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.302 21:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.870 00:11:35.870 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.870 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.870 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.129 { 00:11:36.129 "cntlid": 93, 00:11:36.129 "qid": 0, 00:11:36.129 "state": "enabled", 00:11:36.129 "thread": "nvmf_tgt_poll_group_000", 00:11:36.129 "listen_address": { 00:11:36.129 "trtype": "TCP", 00:11:36.129 "adrfam": "IPv4", 00:11:36.129 "traddr": "10.0.0.2", 00:11:36.129 "trsvcid": "4420" 00:11:36.129 }, 00:11:36.129 "peer_address": { 00:11:36.129 "trtype": "TCP", 00:11:36.129 "adrfam": "IPv4", 00:11:36.129 "traddr": "10.0.0.1", 00:11:36.129 "trsvcid": "39616" 00:11:36.129 }, 00:11:36.129 
"auth": { 00:11:36.129 "state": "completed", 00:11:36.129 "digest": "sha384", 00:11:36.129 "dhgroup": "ffdhe8192" 00:11:36.129 } 00:11:36.129 } 00:11:36.129 ]' 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.129 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.400 21:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:36.999 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:37.257 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:37.257 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.257 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:37.257 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:37.257 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:37.258 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:37.826 00:11:37.826 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.826 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.826 21:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.085 { 00:11:38.085 "cntlid": 95, 00:11:38.085 "qid": 0, 00:11:38.085 "state": "enabled", 00:11:38.085 "thread": "nvmf_tgt_poll_group_000", 00:11:38.085 "listen_address": { 00:11:38.085 "trtype": "TCP", 00:11:38.085 "adrfam": "IPv4", 00:11:38.085 "traddr": "10.0.0.2", 00:11:38.085 "trsvcid": "4420" 00:11:38.085 }, 00:11:38.085 "peer_address": { 00:11:38.085 "trtype": "TCP", 00:11:38.085 "adrfam": "IPv4", 00:11:38.085 "traddr": "10.0.0.1", 00:11:38.085 "trsvcid": "39644" 00:11:38.085 }, 00:11:38.085 "auth": { 00:11:38.085 "state": "completed", 00:11:38.085 "digest": "sha384", 00:11:38.085 "dhgroup": "ffdhe8192" 00:11:38.085 } 00:11:38.085 } 00:11:38.085 ]' 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.085 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.344 21:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:38.912 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.171 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.430 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.430 21:25:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.430 { 00:11:39.430 "cntlid": 97, 00:11:39.430 "qid": 0, 00:11:39.430 "state": "enabled", 00:11:39.430 "thread": "nvmf_tgt_poll_group_000", 00:11:39.430 "listen_address": { 00:11:39.430 "trtype": "TCP", 00:11:39.430 "adrfam": "IPv4", 00:11:39.430 "traddr": "10.0.0.2", 00:11:39.430 "trsvcid": "4420" 00:11:39.430 }, 00:11:39.430 "peer_address": { 00:11:39.430 "trtype": "TCP", 00:11:39.430 "adrfam": "IPv4", 00:11:39.430 "traddr": "10.0.0.1", 00:11:39.430 "trsvcid": "39660" 00:11:39.430 }, 00:11:39.430 "auth": { 00:11:39.430 "state": "completed", 00:11:39.430 "digest": "sha512", 00:11:39.430 "dhgroup": "null" 00:11:39.430 } 00:11:39.430 } 00:11:39.430 ]' 00:11:39.430 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.702 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.702 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.702 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.702 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.702 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.703 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.703 21:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.961 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:40.526 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.526 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:40.526 21:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.527 21:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.785 00:11:40.785 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.785 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.785 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.044 { 00:11:41.044 "cntlid": 99, 00:11:41.044 "qid": 0, 00:11:41.044 "state": "enabled", 00:11:41.044 "thread": "nvmf_tgt_poll_group_000", 00:11:41.044 "listen_address": { 00:11:41.044 "trtype": "TCP", 00:11:41.044 "adrfam": "IPv4", 00:11:41.044 
"traddr": "10.0.0.2", 00:11:41.044 "trsvcid": "4420" 00:11:41.044 }, 00:11:41.044 "peer_address": { 00:11:41.044 "trtype": "TCP", 00:11:41.044 "adrfam": "IPv4", 00:11:41.044 "traddr": "10.0.0.1", 00:11:41.044 "trsvcid": "39684" 00:11:41.044 }, 00:11:41.044 "auth": { 00:11:41.044 "state": "completed", 00:11:41.044 "digest": "sha512", 00:11:41.044 "dhgroup": "null" 00:11:41.044 } 00:11:41.044 } 00:11:41.044 ]' 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.044 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.303 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.303 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.303 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.303 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.303 21:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.870 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.130 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.389 00:11:42.389 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.389 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.389 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.648 { 00:11:42.648 "cntlid": 101, 00:11:42.648 "qid": 0, 00:11:42.648 "state": "enabled", 00:11:42.648 "thread": "nvmf_tgt_poll_group_000", 00:11:42.648 "listen_address": { 00:11:42.648 "trtype": "TCP", 00:11:42.648 "adrfam": "IPv4", 00:11:42.648 "traddr": "10.0.0.2", 00:11:42.648 "trsvcid": "4420" 00:11:42.648 }, 00:11:42.648 "peer_address": { 00:11:42.648 "trtype": "TCP", 00:11:42.648 "adrfam": "IPv4", 00:11:42.648 "traddr": "10.0.0.1", 00:11:42.648 "trsvcid": "43198" 00:11:42.648 }, 00:11:42.648 "auth": { 00:11:42.648 "state": "completed", 00:11:42.648 "digest": "sha512", 00:11:42.648 "dhgroup": "null" 00:11:42.648 } 00:11:42.648 } 00:11:42.648 ]' 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:42.648 21:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.907 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.907 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.907 21:25:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.907 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.508 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.767 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:43.768 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.768 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.768 21:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.768 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.768 21:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.027 00:11:44.027 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:11:44.027 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.027 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.286 { 00:11:44.286 "cntlid": 103, 00:11:44.286 "qid": 0, 00:11:44.286 "state": "enabled", 00:11:44.286 "thread": "nvmf_tgt_poll_group_000", 00:11:44.286 "listen_address": { 00:11:44.286 "trtype": "TCP", 00:11:44.286 "adrfam": "IPv4", 00:11:44.286 "traddr": "10.0.0.2", 00:11:44.286 "trsvcid": "4420" 00:11:44.286 }, 00:11:44.286 "peer_address": { 00:11:44.286 "trtype": "TCP", 00:11:44.286 "adrfam": "IPv4", 00:11:44.286 "traddr": "10.0.0.1", 00:11:44.286 "trsvcid": "43234" 00:11:44.286 }, 00:11:44.286 "auth": { 00:11:44.286 "state": "completed", 00:11:44.286 "digest": "sha512", 00:11:44.286 "dhgroup": "null" 00:11:44.286 } 00:11:44.286 } 00:11:44.286 ]' 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:44.286 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.287 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.287 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.287 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.546 21:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.114 
21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.114 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.374 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.634 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.634 { 00:11:45.634 "cntlid": 105, 00:11:45.634 "qid": 0, 00:11:45.634 "state": "enabled", 00:11:45.634 "thread": "nvmf_tgt_poll_group_000", 00:11:45.634 "listen_address": { 00:11:45.634 
"trtype": "TCP", 00:11:45.634 "adrfam": "IPv4", 00:11:45.634 "traddr": "10.0.0.2", 00:11:45.634 "trsvcid": "4420" 00:11:45.634 }, 00:11:45.634 "peer_address": { 00:11:45.634 "trtype": "TCP", 00:11:45.634 "adrfam": "IPv4", 00:11:45.634 "traddr": "10.0.0.1", 00:11:45.634 "trsvcid": "43260" 00:11:45.634 }, 00:11:45.634 "auth": { 00:11:45.634 "state": "completed", 00:11:45.634 "digest": "sha512", 00:11:45.634 "dhgroup": "ffdhe2048" 00:11:45.634 } 00:11:45.634 } 00:11:45.634 ]' 00:11:45.634 21:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.893 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.152 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.720 21:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:46.720 21:25:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.720 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.721 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.721 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.980 00:11:46.980 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.980 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.980 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.239 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.239 { 00:11:47.239 "cntlid": 107, 00:11:47.239 "qid": 0, 00:11:47.239 "state": "enabled", 00:11:47.239 "thread": "nvmf_tgt_poll_group_000", 00:11:47.239 "listen_address": { 00:11:47.239 "trtype": "TCP", 00:11:47.239 "adrfam": "IPv4", 00:11:47.239 "traddr": "10.0.0.2", 00:11:47.239 "trsvcid": "4420" 00:11:47.239 }, 00:11:47.239 "peer_address": { 00:11:47.239 "trtype": "TCP", 00:11:47.239 "adrfam": "IPv4", 00:11:47.239 "traddr": "10.0.0.1", 00:11:47.240 "trsvcid": "43296" 00:11:47.240 }, 00:11:47.240 "auth": { 00:11:47.240 "state": "completed", 00:11:47.240 "digest": "sha512", 00:11:47.240 "dhgroup": "ffdhe2048" 00:11:47.240 } 00:11:47.240 } 00:11:47.240 ]' 00:11:47.240 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.240 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.240 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.498 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.498 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.498 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:47.498 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.498 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.757 21:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.326 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.585 00:11:48.585 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.585 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.585 21:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.845 { 00:11:48.845 "cntlid": 109, 00:11:48.845 "qid": 0, 00:11:48.845 "state": "enabled", 00:11:48.845 "thread": "nvmf_tgt_poll_group_000", 00:11:48.845 "listen_address": { 00:11:48.845 "trtype": "TCP", 00:11:48.845 "adrfam": "IPv4", 00:11:48.845 "traddr": "10.0.0.2", 00:11:48.845 "trsvcid": "4420" 00:11:48.845 }, 00:11:48.845 "peer_address": { 00:11:48.845 "trtype": "TCP", 00:11:48.845 "adrfam": "IPv4", 00:11:48.845 "traddr": "10.0.0.1", 00:11:48.845 "trsvcid": "43320" 00:11:48.845 }, 00:11:48.845 "auth": { 00:11:48.845 "state": "completed", 00:11:48.845 "digest": "sha512", 00:11:48.845 "dhgroup": "ffdhe2048" 00:11:48.845 } 00:11:48.845 } 00:11:48.845 ]' 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.845 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.104 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.104 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.104 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.104 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:49.675 21:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.675 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.935 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:50.193 00:11:50.193 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.193 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.193 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.451 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:50.451 { 00:11:50.451 "cntlid": 111, 00:11:50.451 "qid": 0, 00:11:50.452 "state": "enabled", 00:11:50.452 "thread": "nvmf_tgt_poll_group_000", 00:11:50.452 "listen_address": { 00:11:50.452 "trtype": "TCP", 00:11:50.452 "adrfam": "IPv4", 00:11:50.452 "traddr": "10.0.0.2", 00:11:50.452 "trsvcid": "4420" 00:11:50.452 }, 00:11:50.452 "peer_address": { 00:11:50.452 "trtype": "TCP", 00:11:50.452 "adrfam": "IPv4", 00:11:50.452 "traddr": "10.0.0.1", 00:11:50.452 "trsvcid": "43348" 00:11:50.452 }, 00:11:50.452 "auth": { 00:11:50.452 "state": "completed", 00:11:50.452 "digest": "sha512", 00:11:50.452 "dhgroup": "ffdhe2048" 00:11:50.452 } 00:11:50.452 } 00:11:50.452 ]' 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.452 21:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.710 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:51.275 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.276 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:51.533 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:51.533 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.533 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:11:51.533 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:51.533 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.534 21:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.791 00:11:51.791 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.791 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.791 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.049 { 00:11:52.049 "cntlid": 113, 00:11:52.049 "qid": 0, 00:11:52.049 "state": "enabled", 00:11:52.049 "thread": "nvmf_tgt_poll_group_000", 00:11:52.049 "listen_address": { 00:11:52.049 "trtype": "TCP", 00:11:52.049 "adrfam": "IPv4", 00:11:52.049 "traddr": "10.0.0.2", 00:11:52.049 "trsvcid": "4420" 00:11:52.049 }, 00:11:52.049 "peer_address": { 00:11:52.049 "trtype": "TCP", 00:11:52.049 "adrfam": "IPv4", 00:11:52.049 "traddr": "10.0.0.1", 00:11:52.049 "trsvcid": "43372" 00:11:52.049 }, 00:11:52.049 "auth": { 00:11:52.049 "state": "completed", 00:11:52.049 "digest": "sha512", 00:11:52.049 "dhgroup": "ffdhe3072" 00:11:52.049 } 00:11:52.049 } 00:11:52.049 ]' 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.049 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.307 21:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:52.873 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.134 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.393 00:11:53.393 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.393 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.393 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.651 { 00:11:53.651 "cntlid": 115, 00:11:53.651 "qid": 0, 00:11:53.651 "state": "enabled", 00:11:53.651 "thread": "nvmf_tgt_poll_group_000", 00:11:53.651 "listen_address": { 00:11:53.651 "trtype": "TCP", 00:11:53.651 "adrfam": "IPv4", 00:11:53.651 "traddr": "10.0.0.2", 00:11:53.651 "trsvcid": "4420" 00:11:53.651 }, 00:11:53.651 "peer_address": { 00:11:53.651 "trtype": "TCP", 00:11:53.651 "adrfam": "IPv4", 00:11:53.651 "traddr": "10.0.0.1", 00:11:53.651 "trsvcid": "38858" 00:11:53.651 }, 00:11:53.651 "auth": { 00:11:53.651 "state": "completed", 00:11:53.651 "digest": "sha512", 00:11:53.651 "dhgroup": "ffdhe3072" 00:11:53.651 } 00:11:53.651 } 00:11:53.651 ]' 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:53.651 21:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.651 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.651 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.651 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.910 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:54.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.478 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.736 21:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.995 00:11:54.995 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.996 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.996 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.255 { 00:11:55.255 "cntlid": 117, 00:11:55.255 "qid": 0, 00:11:55.255 "state": "enabled", 00:11:55.255 "thread": "nvmf_tgt_poll_group_000", 00:11:55.255 "listen_address": { 00:11:55.255 "trtype": "TCP", 00:11:55.255 "adrfam": "IPv4", 00:11:55.255 "traddr": "10.0.0.2", 00:11:55.255 "trsvcid": "4420" 00:11:55.255 }, 00:11:55.255 "peer_address": { 00:11:55.255 "trtype": "TCP", 00:11:55.255 "adrfam": "IPv4", 00:11:55.255 "traddr": "10.0.0.1", 00:11:55.255 "trsvcid": "38880" 00:11:55.255 }, 00:11:55.255 "auth": { 00:11:55.255 "state": "completed", 00:11:55.255 "digest": "sha512", 00:11:55.255 "dhgroup": "ffdhe3072" 00:11:55.255 } 00:11:55.255 } 00:11:55.255 ]' 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.255 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.514 21:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.081 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:56.340 21:25:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.341 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.599 00:11:56.599 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.599 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.599 21:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.859 { 00:11:56.859 "cntlid": 119, 00:11:56.859 "qid": 0, 00:11:56.859 "state": "enabled", 00:11:56.859 "thread": "nvmf_tgt_poll_group_000", 00:11:56.859 "listen_address": { 00:11:56.859 "trtype": "TCP", 00:11:56.859 "adrfam": "IPv4", 00:11:56.859 "traddr": "10.0.0.2", 00:11:56.859 "trsvcid": "4420" 00:11:56.859 }, 00:11:56.859 "peer_address": { 00:11:56.859 "trtype": "TCP", 00:11:56.859 "adrfam": "IPv4", 00:11:56.859 "traddr": "10.0.0.1", 00:11:56.859 "trsvcid": "38900" 00:11:56.859 }, 00:11:56.859 "auth": { 00:11:56.859 "state": "completed", 00:11:56.859 "digest": "sha512", 00:11:56.859 "dhgroup": "ffdhe3072" 00:11:56.859 } 00:11:56.859 } 00:11:56.859 ]' 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.859 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.118 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:11:57.686 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:57.687 21:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.946 21:25:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.946 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.205 00:11:58.205 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.205 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.205 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.465 { 00:11:58.465 "cntlid": 121, 00:11:58.465 "qid": 0, 00:11:58.465 "state": "enabled", 00:11:58.465 "thread": "nvmf_tgt_poll_group_000", 00:11:58.465 "listen_address": { 00:11:58.465 "trtype": "TCP", 00:11:58.465 "adrfam": "IPv4", 00:11:58.465 "traddr": "10.0.0.2", 00:11:58.465 "trsvcid": "4420" 00:11:58.465 }, 00:11:58.465 "peer_address": { 00:11:58.465 "trtype": "TCP", 00:11:58.465 "adrfam": "IPv4", 00:11:58.465 "traddr": "10.0.0.1", 00:11:58.465 "trsvcid": "38934" 00:11:58.465 }, 00:11:58.465 "auth": { 00:11:58.465 "state": "completed", 00:11:58.465 "digest": "sha512", 00:11:58.465 "dhgroup": "ffdhe4096" 00:11:58.465 } 00:11:58.465 } 00:11:58.465 ]' 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.465 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.724 21:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret 
DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.292 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:59.551 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.552 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.811 00:11:59.811 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.811 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.811 21:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
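The records just above show one full setup round of target/auth.sh for sha512 / ffdhe4096 with key index 1: the host's allowed digests and DH groups are narrowed, the host NQN is registered on the subsystem with its DH-HMAC-CHAP key pair, and a controller is attached with the same keys. A condensed sketch of that round follows, with commands copied from this log; key1/ckey1 are key names registered earlier in the test (outside this excerpt), and the target-side rpc_cmd wrapper is assumed to use the default RPC socket.

# Condensed from the xtrace above: the sha512 / ffdhe4096 / key-index-1 round.
# key1 and ckey1 are pre-registered key names; their creation is not part of this excerpt.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
SUBSYS=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1) Host-side bdev_nvme: only offer the digest and DH group under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2) Target side: allow the host NQN with its DH-HMAC-CHAP key pair
#    (default target RPC socket assumed).
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) Host side: attach a controller with the same keys; authentication runs here.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1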
00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.068 { 00:12:00.068 "cntlid": 123, 00:12:00.068 "qid": 0, 00:12:00.068 "state": "enabled", 00:12:00.068 "thread": "nvmf_tgt_poll_group_000", 00:12:00.068 "listen_address": { 00:12:00.068 "trtype": "TCP", 00:12:00.068 "adrfam": "IPv4", 00:12:00.068 "traddr": "10.0.0.2", 00:12:00.068 "trsvcid": "4420" 00:12:00.068 }, 00:12:00.068 "peer_address": { 00:12:00.068 "trtype": "TCP", 00:12:00.068 "adrfam": "IPv4", 00:12:00.068 "traddr": "10.0.0.1", 00:12:00.068 "trsvcid": "38968" 00:12:00.068 }, 00:12:00.068 "auth": { 00:12:00.068 "state": "completed", 00:12:00.068 "digest": "sha512", 00:12:00.068 "dhgroup": "ffdhe4096" 00:12:00.068 } 00:12:00.068 } 00:12:00.068 ]' 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.068 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.326 21:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:12:00.893 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:00.894 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.152 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.455 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.455 { 00:12:01.455 "cntlid": 125, 00:12:01.455 "qid": 0, 00:12:01.455 "state": "enabled", 00:12:01.455 "thread": "nvmf_tgt_poll_group_000", 00:12:01.455 "listen_address": { 00:12:01.455 "trtype": "TCP", 00:12:01.455 "adrfam": "IPv4", 00:12:01.455 "traddr": "10.0.0.2", 00:12:01.455 "trsvcid": "4420" 00:12:01.455 }, 00:12:01.455 "peer_address": { 00:12:01.455 "trtype": "TCP", 00:12:01.455 "adrfam": "IPv4", 00:12:01.455 "traddr": "10.0.0.1", 00:12:01.455 "trsvcid": "38988" 00:12:01.455 }, 00:12:01.455 
"auth": { 00:12:01.455 "state": "completed", 00:12:01.455 "digest": "sha512", 00:12:01.455 "dhgroup": "ffdhe4096" 00:12:01.455 } 00:12:01.455 } 00:12:01.455 ]' 00:12:01.455 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.715 21:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.974 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.541 21:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.799 00:12:02.799 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.799 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.799 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.058 { 00:12:03.058 "cntlid": 127, 00:12:03.058 "qid": 0, 00:12:03.058 "state": "enabled", 00:12:03.058 "thread": "nvmf_tgt_poll_group_000", 00:12:03.058 "listen_address": { 00:12:03.058 "trtype": "TCP", 00:12:03.058 "adrfam": "IPv4", 00:12:03.058 "traddr": "10.0.0.2", 00:12:03.058 "trsvcid": "4420" 00:12:03.058 }, 00:12:03.058 "peer_address": { 00:12:03.058 "trtype": "TCP", 00:12:03.058 "adrfam": "IPv4", 00:12:03.058 "traddr": "10.0.0.1", 00:12:03.058 "trsvcid": "38474" 00:12:03.058 }, 00:12:03.058 "auth": { 00:12:03.058 "state": "completed", 00:12:03.058 "digest": "sha512", 00:12:03.058 "dhgroup": "ffdhe4096" 00:12:03.058 } 00:12:03.058 } 00:12:03.058 ]' 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.058 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.317 21:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:03.886 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.146 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.405 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
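Between RPC rounds the log also exercises the same authentication from the kernel initiator: nvme-cli connects to the subsystem passing the host key (and, when bidirectional authentication is under test, the controller key) as literal DH-HMAC-CHAP secrets, then disconnects before the host entry is removed again. A sketch of that leg, with the address, NQNs and flags copied from this log; HOST_KEY and CTRL_KEY below stand in for the DHHC-1:xx:... secrets printed above rather than repeating them.

# Kernel-initiator leg, as exercised between RPC rounds in the log above.
# HOST_KEY / CTRL_KEY are placeholders for the DHHC-1:xx:... secrets shown in the log;
# --dhchap-ctrl-secret is only passed when a controller key is configured for the round.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66
SUBSYS=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.2 -n "$SUBSYS" -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

# Tear down before the next key/dhgroup combination is configured.
nvme disconnect -n "$SUBSYS"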
00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.664 { 00:12:04.664 "cntlid": 129, 00:12:04.664 "qid": 0, 00:12:04.664 "state": "enabled", 00:12:04.664 "thread": "nvmf_tgt_poll_group_000", 00:12:04.664 "listen_address": { 00:12:04.664 "trtype": "TCP", 00:12:04.664 "adrfam": "IPv4", 00:12:04.664 "traddr": "10.0.0.2", 00:12:04.664 "trsvcid": "4420" 00:12:04.664 }, 00:12:04.664 "peer_address": { 00:12:04.664 "trtype": "TCP", 00:12:04.664 "adrfam": "IPv4", 00:12:04.664 "traddr": "10.0.0.1", 00:12:04.664 "trsvcid": "38516" 00:12:04.664 }, 00:12:04.664 "auth": { 00:12:04.664 "state": "completed", 00:12:04.664 "digest": "sha512", 00:12:04.664 "dhgroup": "ffdhe6144" 00:12:04.664 } 00:12:04.664 } 00:12:04.664 ]' 00:12:04.664 21:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.923 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.181 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.750 
21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:05.750 21:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.750 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.317 00:12:06.317 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.317 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.317 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.317 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.318 { 00:12:06.318 "cntlid": 131, 00:12:06.318 "qid": 0, 00:12:06.318 "state": "enabled", 00:12:06.318 "thread": "nvmf_tgt_poll_group_000", 00:12:06.318 "listen_address": { 00:12:06.318 "trtype": "TCP", 00:12:06.318 "adrfam": "IPv4", 00:12:06.318 "traddr": "10.0.0.2", 00:12:06.318 "trsvcid": 
"4420" 00:12:06.318 }, 00:12:06.318 "peer_address": { 00:12:06.318 "trtype": "TCP", 00:12:06.318 "adrfam": "IPv4", 00:12:06.318 "traddr": "10.0.0.1", 00:12:06.318 "trsvcid": "38538" 00:12:06.318 }, 00:12:06.318 "auth": { 00:12:06.318 "state": "completed", 00:12:06.318 "digest": "sha512", 00:12:06.318 "dhgroup": "ffdhe6144" 00:12:06.318 } 00:12:06.318 } 00:12:06.318 ]' 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.318 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.575 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:06.575 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.575 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.575 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.575 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.833 21:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:07.399 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.400 21:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.968 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.968 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.968 { 00:12:07.968 "cntlid": 133, 00:12:07.968 "qid": 0, 00:12:07.968 "state": "enabled", 00:12:07.968 "thread": "nvmf_tgt_poll_group_000", 00:12:07.968 "listen_address": { 00:12:07.968 "trtype": "TCP", 00:12:07.968 "adrfam": "IPv4", 00:12:07.968 "traddr": "10.0.0.2", 00:12:07.968 "trsvcid": "4420" 00:12:07.968 }, 00:12:07.968 "peer_address": { 00:12:07.968 "trtype": "TCP", 00:12:07.968 "adrfam": "IPv4", 00:12:07.968 "traddr": "10.0.0.1", 00:12:07.968 "trsvcid": "38570" 00:12:07.968 }, 00:12:07.968 "auth": { 00:12:07.968 "state": "completed", 00:12:07.968 "digest": "sha512", 00:12:07.968 "dhgroup": "ffdhe6144" 00:12:07.968 } 00:12:07.968 } 00:12:07.968 ]' 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
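After each attach the script verifies, through both RPC servers, that authentication actually completed with the expected parameters before detaching: the controller must appear under the expected name on the host side, and the subsystem's qpair on the target side must report a completed DH-HMAC-CHAP exchange with the digest and DH group under test. A standalone sketch of that check, assuming the target application answers on its default RPC socket (the rpc_cmd wrapper in the log does not show it); ffdhe6144 matches the round shown just above.

# Verification step repeated after every bdev_nvme_attach_controller in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0

# Host side: the controller attached under the expected name.
[[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1

# Target side: the qpair finished DH-HMAC-CHAP with the digest/dhgroup under test
# (default target RPC socket assumed).
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBSYS")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]    || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]] || exit 1

# Detach so the next key/dhgroup combination starts clean.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0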
00:12:08.227 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.486 21:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.054 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.622 00:12:09.622 21:25:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.622 { 00:12:09.622 "cntlid": 135, 00:12:09.622 "qid": 0, 00:12:09.622 "state": "enabled", 00:12:09.622 "thread": "nvmf_tgt_poll_group_000", 00:12:09.622 "listen_address": { 00:12:09.622 "trtype": "TCP", 00:12:09.622 "adrfam": "IPv4", 00:12:09.622 "traddr": "10.0.0.2", 00:12:09.622 "trsvcid": "4420" 00:12:09.622 }, 00:12:09.622 "peer_address": { 00:12:09.622 "trtype": "TCP", 00:12:09.622 "adrfam": "IPv4", 00:12:09.622 "traddr": "10.0.0.1", 00:12:09.622 "trsvcid": "38610" 00:12:09.622 }, 00:12:09.622 "auth": { 00:12:09.622 "state": "completed", 00:12:09.622 "digest": "sha512", 00:12:09.622 "dhgroup": "ffdhe6144" 00:12:09.622 } 00:12:09.622 } 00:12:09.622 ]' 00:12:09.622 21:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.881 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.139 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.705 21:25:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:10.705 21:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.705 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.273 00:12:11.273 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.273 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.273 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.531 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.531 { 00:12:11.531 "cntlid": 137, 00:12:11.531 "qid": 0, 00:12:11.531 "state": "enabled", 
00:12:11.531 "thread": "nvmf_tgt_poll_group_000", 00:12:11.531 "listen_address": { 00:12:11.532 "trtype": "TCP", 00:12:11.532 "adrfam": "IPv4", 00:12:11.532 "traddr": "10.0.0.2", 00:12:11.532 "trsvcid": "4420" 00:12:11.532 }, 00:12:11.532 "peer_address": { 00:12:11.532 "trtype": "TCP", 00:12:11.532 "adrfam": "IPv4", 00:12:11.532 "traddr": "10.0.0.1", 00:12:11.532 "trsvcid": "38644" 00:12:11.532 }, 00:12:11.532 "auth": { 00:12:11.532 "state": "completed", 00:12:11.532 "digest": "sha512", 00:12:11.532 "dhgroup": "ffdhe8192" 00:12:11.532 } 00:12:11.532 } 00:12:11.532 ]' 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.532 21:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.790 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:12.357 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:12.615 
21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.615 21:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.183 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.183 21:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.442 { 00:12:13.442 "cntlid": 139, 00:12:13.442 "qid": 0, 00:12:13.442 "state": "enabled", 00:12:13.442 "thread": "nvmf_tgt_poll_group_000", 00:12:13.442 "listen_address": { 00:12:13.442 "trtype": "TCP", 00:12:13.442 "adrfam": "IPv4", 00:12:13.442 "traddr": "10.0.0.2", 00:12:13.442 "trsvcid": "4420" 00:12:13.442 }, 00:12:13.442 "peer_address": { 00:12:13.442 "trtype": "TCP", 00:12:13.442 "adrfam": "IPv4", 00:12:13.442 "traddr": "10.0.0.1", 00:12:13.442 "trsvcid": "59008" 00:12:13.442 }, 00:12:13.442 "auth": { 00:12:13.442 "state": "completed", 00:12:13.442 "digest": "sha512", 00:12:13.442 "dhgroup": "ffdhe8192" 00:12:13.442 } 00:12:13.442 } 00:12:13.442 ]' 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.442 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.700 21:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:01:MzNmZGY3NGU1NzVlNjg2N2E1YTFiM2MwNmY2MWExNDKG4tLy: --dhchap-ctrl-secret DHHC-1:02:MWZkMzc4ZmQ1ZWQ0ODVlZWI4NDc0MzE3ZDc4MTFhYjk4NWRjMGU4Nzc1ZjY3OTNiD8BjSQ==: 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.269 21:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.838 00:12:14.838 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.838 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.838 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.097 { 00:12:15.097 "cntlid": 141, 00:12:15.097 "qid": 0, 00:12:15.097 "state": "enabled", 00:12:15.097 "thread": "nvmf_tgt_poll_group_000", 00:12:15.097 "listen_address": { 00:12:15.097 "trtype": "TCP", 00:12:15.097 "adrfam": "IPv4", 00:12:15.097 "traddr": "10.0.0.2", 00:12:15.097 "trsvcid": "4420" 00:12:15.097 }, 00:12:15.097 "peer_address": { 00:12:15.097 "trtype": "TCP", 00:12:15.097 "adrfam": "IPv4", 00:12:15.097 "traddr": "10.0.0.1", 00:12:15.097 "trsvcid": "59032" 00:12:15.097 }, 00:12:15.097 "auth": { 00:12:15.097 "state": "completed", 00:12:15.097 "digest": "sha512", 00:12:15.097 "dhgroup": "ffdhe8192" 00:12:15.097 } 00:12:15.097 } 00:12:15.097 ]' 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.097 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.356 21:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:02:NTIyYTU5YjYzNTBlYmI0NjkxYTA4ZWEzOTczM2RjYzAyNDU1YTU3ZDlmYmJkZGFhmO9NWw==: --dhchap-ctrl-secret DHHC-1:01:ZjUzYjY4MmQ5NDM3NzZmMTYyYmY0OTMwMGU3N2EyMTOkCNRH: 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:15.924 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.182 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.748 00:12:16.748 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.748 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.748 21:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.748 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.748 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.748 21:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.748 21:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.006 21:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
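At this point the loop over dhgroups has reached ffdhe8192, and each connect_authenticate iteration in the trace reduces to the same short RPC sequence. The lines below are a minimal standalone sketch of that sequence, not a replacement for target/auth.sh: it assumes rpc.py is the script shown in the trace, the target app answers on the default /var/tmp/spdk.sock, the host bdev layer answers on /var/tmp/host.sock, and the key0/ckey0 key names were registered with both sides earlier in the test (outside this excerpt); the kernel-initiator leg (nvme connect --dhchap-secret ... / nvme disconnect) exercised right after each check is omitted for brevity.

# Names taken from the trace above; adjust for your environment.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Host side: restrict DH-HMAC-CHAP negotiation to a single digest/dhgroup pair.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side: authorize the host NQN with key0 (ckey0 adds controller authentication).
"$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the matching key pair.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the qpair completed DH-HMAC-CHAP with the expected parameters,
# exactly as the jq checks on .auth.state/.auth.digest/.auth.dhgroup do above.
qpairs=$("$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

# Tear down before the next key/dhgroup combination.
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"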
00:12:17.006 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.006 { 00:12:17.006 "cntlid": 143, 00:12:17.007 "qid": 0, 00:12:17.007 "state": "enabled", 00:12:17.007 "thread": "nvmf_tgt_poll_group_000", 00:12:17.007 "listen_address": { 00:12:17.007 "trtype": "TCP", 00:12:17.007 "adrfam": "IPv4", 00:12:17.007 "traddr": "10.0.0.2", 00:12:17.007 "trsvcid": "4420" 00:12:17.007 }, 00:12:17.007 "peer_address": { 00:12:17.007 "trtype": "TCP", 00:12:17.007 "adrfam": "IPv4", 00:12:17.007 "traddr": "10.0.0.1", 00:12:17.007 "trsvcid": "59066" 00:12:17.007 }, 00:12:17.007 "auth": { 00:12:17.007 "state": "completed", 00:12:17.007 "digest": "sha512", 00:12:17.007 "dhgroup": "ffdhe8192" 00:12:17.007 } 00:12:17.007 } 00:12:17.007 ]' 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.007 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.265 21:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.832 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.091 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.657 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.657 { 00:12:18.657 "cntlid": 145, 00:12:18.657 "qid": 0, 00:12:18.657 "state": "enabled", 00:12:18.657 "thread": "nvmf_tgt_poll_group_000", 00:12:18.657 "listen_address": { 00:12:18.657 "trtype": "TCP", 00:12:18.657 "adrfam": "IPv4", 00:12:18.657 "traddr": "10.0.0.2", 00:12:18.657 "trsvcid": "4420" 00:12:18.657 }, 00:12:18.657 "peer_address": { 00:12:18.657 "trtype": "TCP", 00:12:18.657 "adrfam": "IPv4", 00:12:18.657 "traddr": "10.0.0.1", 00:12:18.657 "trsvcid": "59090" 00:12:18.657 }, 00:12:18.657 "auth": { 00:12:18.657 "state": "completed", 00:12:18.657 "digest": "sha512", 00:12:18.657 "dhgroup": "ffdhe8192" 00:12:18.657 } 00:12:18.657 } 
00:12:18.657 ]' 00:12:18.657 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.657 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.657 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.914 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:00:YmMyNTA0NzFjMDkzYjdlMzMyNWNlMDBmMDM4YzdlZGMwYjIwMmJmOGFjY2M1MTg4n1uSvA==: --dhchap-ctrl-secret DHHC-1:03:YTNjOGNmZmQxMmRhMDBmZGZkYTA1NmVhNGFmYjBlMzVjM2ExMTlkODY0NTNmY2FkNGIzMGVmOGYwZTM5OGFjMbrQcaU=: 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.480 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.738 21:25:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:19.738 21:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:19.996 request: 00:12:19.996 { 00:12:19.996 "name": "nvme0", 00:12:19.996 "trtype": "tcp", 00:12:19.996 "traddr": "10.0.0.2", 00:12:19.996 "adrfam": "ipv4", 00:12:19.996 "trsvcid": "4420", 00:12:19.996 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:19.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:19.996 "prchk_reftag": false, 00:12:19.996 "prchk_guard": false, 00:12:19.996 "hdgst": false, 00:12:19.996 "ddgst": false, 00:12:19.996 "dhchap_key": "key2", 00:12:19.996 "method": "bdev_nvme_attach_controller", 00:12:19.996 "req_id": 1 00:12:19.996 } 00:12:19.996 Got JSON-RPC error response 00:12:19.996 response: 00:12:19.996 { 00:12:19.996 "code": -5, 00:12:19.996 "message": "Input/output error" 00:12:19.996 } 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.996 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.252 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:20.509 request: 00:12:20.509 { 00:12:20.509 "name": "nvme0", 00:12:20.509 "trtype": "tcp", 00:12:20.509 "traddr": "10.0.0.2", 00:12:20.509 "adrfam": "ipv4", 00:12:20.509 "trsvcid": "4420", 00:12:20.509 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:20.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:20.509 "prchk_reftag": false, 00:12:20.509 "prchk_guard": false, 00:12:20.509 "hdgst": false, 00:12:20.509 "ddgst": false, 00:12:20.509 "dhchap_key": "key1", 00:12:20.509 "dhchap_ctrlr_key": "ckey2", 00:12:20.509 "method": "bdev_nvme_attach_controller", 00:12:20.509 "req_id": 1 00:12:20.509 } 00:12:20.509 Got JSON-RPC error response 00:12:20.509 response: 00:12:20.509 { 00:12:20.509 "code": -5, 00:12:20.509 "message": "Input/output error" 00:12:20.509 } 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key1 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.768 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.026 request: 00:12:21.026 { 00:12:21.026 "name": "nvme0", 00:12:21.026 "trtype": "tcp", 00:12:21.026 "traddr": "10.0.0.2", 00:12:21.026 "adrfam": "ipv4", 00:12:21.026 "trsvcid": "4420", 00:12:21.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:21.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:21.026 "prchk_reftag": false, 00:12:21.026 "prchk_guard": false, 00:12:21.026 "hdgst": false, 00:12:21.026 "ddgst": false, 00:12:21.026 "dhchap_key": "key1", 00:12:21.026 "dhchap_ctrlr_key": "ckey1", 00:12:21.026 "method": "bdev_nvme_attach_controller", 00:12:21.026 "req_id": 1 00:12:21.026 } 00:12:21.026 Got JSON-RPC error response 00:12:21.026 response: 00:12:21.026 { 00:12:21.026 "code": -5, 00:12:21.026 "message": "Input/output error" 00:12:21.026 } 00:12:21.282 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:21.282 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:21.282 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69099 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69099 ']' 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69099 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69099 00:12:21.283 killing process with pid 69099 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69099' 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69099 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69099 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:21.283 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71777 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71777 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71777 ']' 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.540 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:22.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
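The request/response pairs traced at target/auth.sh@118 through @132 are the negative half of the test: the host is authorized with one key, the attach is attempted with a different key (or controller key), and the RPC must come back with JSON-RPC error -5 ("Input/output error"). A condensed sketch of that pattern follows, under the same assumptions as the previous snippet (default sockets, key names already registered on both sides):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target authorizes the host with key1 only.
"$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

# Offering key2 instead must fail; the trace shows SPDK reporting JSON-RPC
# error -5 ("Input/output error"), so a zero exit status here is a test failure.
if "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
    echo "ERROR: attach with the wrong DH-HMAC-CHAP key unexpectedly succeeded" >&2
    exit 1
fi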
00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71777 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 71777 ']' 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.522 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.781 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.349 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.349 { 00:12:23.349 "cntlid": 1, 00:12:23.349 "qid": 0, 00:12:23.349 "state": "enabled", 00:12:23.349 "thread": "nvmf_tgt_poll_group_000", 00:12:23.349 "listen_address": { 00:12:23.349 "trtype": "TCP", 00:12:23.349 "adrfam": "IPv4", 00:12:23.349 "traddr": "10.0.0.2", 00:12:23.349 "trsvcid": "4420" 00:12:23.349 }, 00:12:23.349 "peer_address": { 00:12:23.349 "trtype": "TCP", 00:12:23.349 "adrfam": "IPv4", 00:12:23.349 "traddr": "10.0.0.1", 00:12:23.349 "trsvcid": "49884" 00:12:23.349 }, 00:12:23.349 "auth": { 00:12:23.349 "state": "completed", 00:12:23.349 "digest": "sha512", 00:12:23.349 "dhgroup": "ffdhe8192" 00:12:23.349 } 00:12:23.349 } 00:12:23.349 ]' 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.349 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.607 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.607 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.607 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.607 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.607 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.867 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-secret DHHC-1:03:NTFjMWIzNTU1NDE3Y2RmMGIwOGIwODM5ZDk2MjU1MTg2ZWQxNWFjZDM5NTA0MDk0ZDlmODQ2NWEwOTk4MTU4Y6ufyrE=: 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --dhchap-key key3 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.434 21:25:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.434 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.694 request: 00:12:24.694 { 00:12:24.694 "name": "nvme0", 00:12:24.694 "trtype": "tcp", 00:12:24.694 "traddr": "10.0.0.2", 00:12:24.694 "adrfam": "ipv4", 00:12:24.694 "trsvcid": "4420", 00:12:24.694 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:24.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:24.695 "prchk_reftag": false, 00:12:24.695 "prchk_guard": false, 00:12:24.695 "hdgst": false, 00:12:24.695 "ddgst": false, 00:12:24.695 "dhchap_key": "key3", 00:12:24.695 "method": "bdev_nvme_attach_controller", 00:12:24.695 "req_id": 1 00:12:24.695 } 00:12:24.695 Got JSON-RPC error response 00:12:24.695 response: 00:12:24.695 { 00:12:24.695 "code": -5, 00:12:24.695 "message": "Input/output error" 00:12:24.695 } 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s 
sha256,sha384,sha512 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:24.695 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:24.953 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.211 request: 00:12:25.211 { 00:12:25.211 "name": "nvme0", 00:12:25.211 "trtype": "tcp", 00:12:25.211 "traddr": "10.0.0.2", 00:12:25.211 "adrfam": "ipv4", 00:12:25.211 "trsvcid": "4420", 00:12:25.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:25.211 "prchk_reftag": false, 00:12:25.211 "prchk_guard": false, 00:12:25.211 "hdgst": false, 00:12:25.211 "ddgst": false, 00:12:25.211 "dhchap_key": "key3", 00:12:25.211 "method": "bdev_nvme_attach_controller", 00:12:25.211 "req_id": 1 00:12:25.211 } 00:12:25.211 Got JSON-RPC error response 00:12:25.211 response: 00:12:25.211 { 00:12:25.211 "code": -5, 00:12:25.211 "message": "Input/output error" 00:12:25.211 } 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@175 -- # IFS=, 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.211 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:25.470 request: 00:12:25.470 { 00:12:25.470 "name": "nvme0", 
00:12:25.470 "trtype": "tcp", 00:12:25.470 "traddr": "10.0.0.2", 00:12:25.470 "adrfam": "ipv4", 00:12:25.470 "trsvcid": "4420", 00:12:25.470 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:25.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66", 00:12:25.470 "prchk_reftag": false, 00:12:25.470 "prchk_guard": false, 00:12:25.470 "hdgst": false, 00:12:25.470 "ddgst": false, 00:12:25.470 "dhchap_key": "key0", 00:12:25.470 "dhchap_ctrlr_key": "key1", 00:12:25.470 "method": "bdev_nvme_attach_controller", 00:12:25.470 "req_id": 1 00:12:25.470 } 00:12:25.470 Got JSON-RPC error response 00:12:25.470 response: 00:12:25.470 { 00:12:25.470 "code": -5, 00:12:25.470 "message": "Input/output error" 00:12:25.470 } 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:25.470 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:25.729 00:12:25.729 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:25.729 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:25.729 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.988 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.988 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.988 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69131 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69131 ']' 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69131 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:26.248 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69131 00:12:26.249 killing process with pid 69131 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69131' 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69131 00:12:26.249 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69131 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.508 rmmod nvme_tcp 00:12:26.508 rmmod nvme_fabrics 00:12:26.508 rmmod nvme_keyring 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71777 ']' 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71777 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 71777 ']' 00:12:26.508 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 71777 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71777 00:12:26.767 killing process with pid 71777 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71777' 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 71777 00:12:26.767 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 71777 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.767 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.027 21:26:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:27.027 
21:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.akQ /tmp/spdk.key-sha256.4wz /tmp/spdk.key-sha384.OGo /tmp/spdk.key-sha512.FZt /tmp/spdk.key-sha512.W5v /tmp/spdk.key-sha384.wJz /tmp/spdk.key-sha256.BHK '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:27.027 00:12:27.027 real 2m13.034s 00:12:27.027 user 5m6.210s 00:12:27.027 sys 0m28.107s 00:12:27.027 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.027 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.027 ************************************ 00:12:27.027 END TEST nvmf_auth_target 00:12:27.027 ************************************ 00:12:27.027 21:26:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:27.027 21:26:00 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:27.027 21:26:00 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:27.027 21:26:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:27.027 21:26:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.027 21:26:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.027 ************************************ 00:12:27.027 START TEST nvmf_bdevio_no_huge 00:12:27.027 ************************************ 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:27.027 * Looking for test storage... 00:12:27.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.027 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.287 
21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:27.287 Cannot find device "nvmf_tgt_br" 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:27.287 Cannot find device "nvmf_tgt_br2" 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:27.287 Cannot find device "nvmf_tgt_br" 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:27.287 Cannot find device "nvmf_tgt_br2" 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:27.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:27.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:27.287 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:27.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:27.547 00:12:27.547 --- 10.0.0.2 ping statistics --- 00:12:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.547 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:27.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:27.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:27.547 00:12:27.547 --- 10.0.0.3 ping statistics --- 00:12:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.547 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:27.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:27.547 00:12:27.547 --- 10.0.0.1 ping statistics --- 00:12:27.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.547 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72081 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72081 00:12:27.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72081 ']' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.547 21:26:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:27.806 [2024-07-15 21:26:00.916501] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:27.806 [2024-07-15 21:26:00.916560] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:27.806 [2024-07-15 21:26:01.061801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.064 [2024-07-15 21:26:01.183631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:28.064 [2024-07-15 21:26:01.184024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.064 [2024-07-15 21:26:01.184365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.064 [2024-07-15 21:26:01.184655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.064 [2024-07-15 21:26:01.184918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.064 [2024-07-15 21:26:01.185326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:28.064 [2024-07-15 21:26:01.185508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:28.064 [2024-07-15 21:26:01.185509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.064 [2024-07-15 21:26:01.185419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:28.064 [2024-07-15 21:26:01.189972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:28.649 [2024-07-15 21:26:01.891763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:28.649 Malloc0 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:28.649 [2024-07-15 21:26:01.939834] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:28.649 { 00:12:28.649 "params": { 00:12:28.649 "name": "Nvme$subsystem", 00:12:28.649 "trtype": "$TEST_TRANSPORT", 00:12:28.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:28.649 "adrfam": "ipv4", 00:12:28.649 "trsvcid": "$NVMF_PORT", 00:12:28.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:28.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:28.649 "hdgst": ${hdgst:-false}, 00:12:28.649 "ddgst": ${ddgst:-false} 00:12:28.649 }, 00:12:28.649 "method": "bdev_nvme_attach_controller" 00:12:28.649 } 00:12:28.649 EOF 00:12:28.649 )") 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:28.649 21:26:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:28.649 "params": { 00:12:28.649 "name": "Nvme1", 00:12:28.649 "trtype": "tcp", 00:12:28.649 "traddr": "10.0.0.2", 00:12:28.649 "adrfam": "ipv4", 00:12:28.649 "trsvcid": "4420", 00:12:28.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:28.649 "hdgst": false, 00:12:28.649 "ddgst": false 00:12:28.649 }, 00:12:28.649 "method": "bdev_nvme_attach_controller" 00:12:28.649 }' 00:12:28.649 [2024-07-15 21:26:01.993012] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:12:28.649 [2024-07-15 21:26:01.993076] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72116 ] 00:12:28.908 [2024-07-15 21:26:02.128121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:28.908 [2024-07-15 21:26:02.251989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.908 [2024-07-15 21:26:02.252169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.908 [2024-07-15 21:26:02.252170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.908 [2024-07-15 21:26:02.264382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:29.168 I/O targets: 00:12:29.168 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:29.168 00:12:29.168 00:12:29.168 CUnit - A unit testing framework for C - Version 2.1-3 00:12:29.168 http://cunit.sourceforge.net/ 00:12:29.168 00:12:29.168 00:12:29.168 Suite: bdevio tests on: Nvme1n1 00:12:29.168 Test: blockdev write read block ...passed 00:12:29.168 Test: blockdev write zeroes read block ...passed 00:12:29.168 Test: blockdev write zeroes read no split ...passed 00:12:29.168 Test: blockdev write zeroes read split ...passed 00:12:29.168 Test: blockdev write zeroes read split partial ...passed 00:12:29.168 Test: blockdev reset ...[2024-07-15 21:26:02.451136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:29.168 [2024-07-15 21:26:02.451357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d7870 (9): Bad file descriptor 00:12:29.168 [2024-07-15 21:26:02.471693] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:29.168 passed 00:12:29.168 Test: blockdev write read 8 blocks ...passed 00:12:29.168 Test: blockdev write read size > 128k ...passed 00:12:29.168 Test: blockdev write read invalid size ...passed 00:12:29.168 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.168 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.168 Test: blockdev write read max offset ...passed 00:12:29.168 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.168 Test: blockdev writev readv 8 blocks ...passed 00:12:29.168 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.168 Test: blockdev writev readv block ...passed 00:12:29.168 Test: blockdev writev readv size > 128k ...passed 00:12:29.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.168 Test: blockdev comparev and writev ...[2024-07-15 21:26:02.479498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.479638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.479662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.479673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.479908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.479921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.479935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.480167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.480178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.480191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.480199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.480409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.480420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.480433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:29.168 [2024-07-15 21:26:02.480441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:29.168 passed 00:12:29.168 Test: blockdev nvme passthru rw ...passed 00:12:29.168 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:26:02.481404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:29.168 [2024-07-15 21:26:02.481419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.481494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:29.168 [2024-07-15 21:26:02.481505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.481581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:29.168 [2024-07-15 21:26:02.481592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:29.168 [2024-07-15 21:26:02.481672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:29.168 [2024-07-15 21:26:02.481683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:29.168 passed 00:12:29.168 Test: blockdev nvme admin passthru ...passed 00:12:29.168 Test: blockdev copy ...passed 00:12:29.168 00:12:29.168 Run Summary: Type Total Ran Passed Failed Inactive 00:12:29.168 suites 1 1 n/a 0 0 00:12:29.168 tests 23 23 23 0 0 00:12:29.168 asserts 152 152 152 0 n/a 00:12:29.168 00:12:29.168 Elapsed time = 0.180 seconds 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:29.737 rmmod nvme_tcp 00:12:29.737 rmmod nvme_fabrics 00:12:29.737 rmmod nvme_keyring 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72081 ']' 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72081 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72081 ']' 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72081 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72081 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:29.737 killing process with pid 72081 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72081' 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72081 00:12:29.737 21:26:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72081 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:30.304 ************************************ 00:12:30.304 END TEST nvmf_bdevio_no_huge 00:12:30.304 ************************************ 00:12:30.304 00:12:30.304 real 0m3.207s 00:12:30.304 user 0m9.862s 00:12:30.304 sys 0m1.393s 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.304 21:26:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:30.304 21:26:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:30.304 21:26:03 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:30.304 21:26:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.304 21:26:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.304 21:26:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.304 ************************************ 00:12:30.304 START TEST nvmf_tls 00:12:30.304 ************************************ 00:12:30.304 21:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:30.304 * Looking for test storage... 
00:12:30.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.304 21:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.563 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:30.564 Cannot find device "nvmf_tgt_br" 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.564 Cannot find device "nvmf_tgt_br2" 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:30.564 Cannot find device "nvmf_tgt_br" 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:30.564 Cannot find device "nvmf_tgt_br2" 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.564 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.823 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.823 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:30.823 21:26:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:30.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:12:30.823 00:12:30.823 --- 10.0.0.2 ping statistics --- 00:12:30.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.823 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:30.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:12:30.823 00:12:30.823 --- 10.0.0.3 ping statistics --- 00:12:30.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.823 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:30.823 00:12:30.823 --- 10.0.0.1 ping statistics --- 00:12:30.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.823 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.823 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.824 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72303 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72303 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72303 ']' 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.083 21:26:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:31.083 [2024-07-15 21:26:04.270987] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:12:31.083 [2024-07-15 21:26:04.271040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.083 [2024-07-15 21:26:04.416131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.343 [2024-07-15 21:26:04.491966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.343 [2024-07-15 21:26:04.492013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
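The "Cannot find device" and "Cannot open network namespace" messages above are harmless: nvmf/common.sh first tears down interfaces left over from a previous run before rebuilding the test topology. What it then builds is one network namespace (nvmf_tgt_ns_spdk) holding the target-side addresses 10.0.0.2 and 10.0.0.3, a host-side initiator interface at 10.0.0.1, a bridge (nvmf_br) joining the host-side veth peers, and iptables rules admitting NVMe/TCP traffic on port 4420. Condensed into a standalone sketch using the same names, addresses, and commands as the trace (the cleanup half omitted):

  # topology built by nvmf/common.sh@166-207 above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> namespace, as in the three pings above

The three pings confirm connectivity in both directions before any NVMe traffic flows. nvmf_tgt (nvmf/common.sh@480 above) and later spdk_nvme_perf are started inside the namespace via ip netns exec nvmf_tgt_ns_spdk, while the bdevperf initiators run on the host side and reach 10.0.0.2:4420 across nvmf_br.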
00:12:31.343 [2024-07-15 21:26:04.492023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.343 [2024-07-15 21:26:04.492031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.343 [2024-07-15 21:26:04.492038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.343 [2024-07-15 21:26:04.492067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:31.909 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:32.167 true 00:12:32.167 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:32.167 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:32.167 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:32.167 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:32.167 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:32.425 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:32.425 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:32.682 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:32.682 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:32.682 21:26:05 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:32.940 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:33.198 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:33.198 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:33.198 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:33.456 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:33.456 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
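The enable_ktls value read back by the jq call above lands just below (true after --enable-ktls, then false again after --disable-ktls). Everything from tls.sh@70 to @113 is the same set-then-verify pattern against the ssl socket implementation: change one option with sock_impl_set_options, read it back with sock_impl_get_options piped through jq, and compare. Condensed (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, as in the trace):

  # set-then-verify pattern from target/tls.sh@70-113
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  [[ "$(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version)" == "13" ]]
  rpc.py sock_impl_set_options -i ssl --tls-version 7     # the script also stores 7 and only checks it reads back
  [[ "$(rpc.py sock_impl_get_options -i ssl | jq -r .tls_version)" == "7" ]]
  rpc.py sock_impl_set_options -i ssl --enable-ktls
  [[ "$(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls)" == "true" ]]
  rpc.py sock_impl_set_options -i ssl --disable-ktls
  [[ "$(rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls)" == "false" ]]

This works before the target is fully initialized because it was started with --wait-for-rpc (tls.sh@63); framework_start_init at tls.sh@131 later completes startup.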
00:12:33.713 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:33.713 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:33.713 21:26:06 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:33.713 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:33.713 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:34.004 21:26:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ua2dEy1srG 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aDf6QsOJqu 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ua2dEy1srG 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aDf6QsOJqu 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:34.283 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:34.540 [2024-07-15 21:26:07.862516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:34.798 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ua2dEy1srG 00:12:34.798 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ua2dEy1srG 00:12:34.798 21:26:07 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:34.798 [2024-07-15 21:26:08.086079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.798 21:26:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:35.056 21:26:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:35.314 [2024-07-15 21:26:08.445847] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:35.314 [2024-07-15 21:26:08.446022] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.314 21:26:08 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:35.314 malloc0 00:12:35.314 21:26:08 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:35.572 21:26:08 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ua2dEy1srG 00:12:35.830 [2024-07-15 21:26:09.001915] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:35.830 21:26:09 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ua2dEy1srG 00:12:48.030 Initializing NVMe Controllers 00:12:48.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:48.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:48.030 Initialization complete. Launching workers. 
00:12:48.030 ======================================================== 00:12:48.030 Latency(us) 00:12:48.030 Device Information : IOPS MiB/s Average min max 00:12:48.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14363.36 56.11 4456.31 1289.14 6425.45 00:12:48.030 ======================================================== 00:12:48.030 Total : 14363.36 56.11 4456.31 1289.14 6425.45 00:12:48.030 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ua2dEy1srG 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ua2dEy1srG' 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72518 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72518 /var/tmp/bdevperf.sock 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72518 ']' 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.030 21:26:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:48.030 [2024-07-15 21:26:19.251595] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
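Both key files fed to --psk / --psk-path above (/tmp/tmp.ua2dEy1srG and /tmp/tmp.aDf6QsOJqu) were produced at target/tls.sh@118-128: format_interchange_psk hands the hex string and a type digit to format_key in nvmf/common.sh, which shells out to an inline python snippet. The 48 base64 characters in the resulting NVMeTLSkey-1:01:...: strings decode to the 32-character key text itself plus four extra bytes, so the sketch below reconstructs what that helper appears to compute; the assumption that the tail is the key's CRC-32 appended little-endian is mine, not something stated in the trace (python3 -c here, versus the script's `python -` heredoc):

  # hedged re-creation of `format_interchange_psk 00112233445566778899aabbccddeeff 1`
  # assumption: trailing 4 bytes = CRC-32 of the key string, little-endian, base64 taken over key+CRC
  python3 -c 'import base64,zlib; k=b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:01:"+base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()+":")'
  # tls.sh@118 above shows the expected value for this input; if the byte-order assumption is wrong,
  # only the last few characters before the closing ':' would differ

The second argument becomes the two-digit type field: 1 gives the :01: keys used here, while the 48-character hex key at tls.sh@159 later is formatted with 2 and yields a :02: key.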
00:12:48.030 [2024-07-15 21:26:19.251802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72518 ] 00:12:48.030 [2024-07-15 21:26:19.387591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.030 [2024-07-15 21:26:19.474651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.030 [2024-07-15 21:26:19.515941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:48.030 21:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.030 21:26:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:48.030 21:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ua2dEy1srG 00:12:48.030 [2024-07-15 21:26:20.329507] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:48.030 [2024-07-15 21:26:20.329855] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:48.030 TLSTESTn1 00:12:48.030 21:26:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:48.030 Running I/O for 10 seconds... 00:12:58.005 00:12:58.005 Latency(us) 00:12:58.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.005 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:58.005 Verification LBA range: start 0x0 length 0x2000 00:12:58.005 TLSTESTn1 : 10.01 5800.98 22.66 0.00 0.00 22031.36 4316.43 18634.33 00:12:58.005 =================================================================================================================== 00:12:58.005 Total : 5800.98 22.66 0.00 0.00 22031.36 4316.43 18634.33 00:12:58.005 0 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72518 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72518 ']' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72518 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72518 00:12:58.005 killing process with pid 72518 00:12:58.005 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.005 00:12:58.005 Latency(us) 00:12:58.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.005 =================================================================================================================== 00:12:58.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72518' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72518 00:12:58.005 [2024-07-15 21:26:30.577582] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72518 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aDf6QsOJqu 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aDf6QsOJqu 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aDf6QsOJqu 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aDf6QsOJqu' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72646 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72646 /var/tmp/bdevperf.sock 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72646 ']' 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:58.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.005 21:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.005 [2024-07-15 21:26:30.806207] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
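From target/tls.sh@146 the suite turns to negative tests. This bdevperf instance (pid 72646) is started for an attach that is supposed to fail: the PSK handed to run_bdevperf is /tmp/tmp.aDf6QsOJqu, the second key, which does not match the key registered for cnode1/host1 during setup_nvmf_tgt, so the TLS handshake cannot complete even though bdevperf itself starts normally. The whole invocation is wrapped in the NOT helper from autotest_common.sh, whose job is to turn the expected failure into a pass. A minimal stand-in with the same contract is sketched below; it is an illustration only (the real helper, visible in the trace via the es=1 and (( es > 128 )) checks, also distinguishes crashes and signals from ordinary failures):

  # minimal stand-in for the NOT wrapper used at tls.sh@146/149/152/155 -- not the real implementation
  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded: the negative test fails
      fi
      return 0        # wrapped command failed as expected: the negative test passes
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aDf6QsOJqu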
00:12:58.005 [2024-07-15 21:26:30.806369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72646 ] 00:12:58.005 [2024-07-15 21:26:30.947079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.006 [2024-07-15 21:26:31.033743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.006 [2024-07-15 21:26:31.074872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aDf6QsOJqu 00:12:58.572 [2024-07-15 21:26:31.800239] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:58.572 [2024-07-15 21:26:31.800337] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:58.572 [2024-07-15 21:26:31.804753] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:58.572 [2024-07-15 21:26:31.805397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f41f0 (107): Transport endpoint is not connected 00:12:58.572 [2024-07-15 21:26:31.806382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f41f0 (9): Bad file descriptor 00:12:58.572 [2024-07-15 21:26:31.807379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:58.572 [2024-07-15 21:26:31.807398] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:58.572 [2024-07-15 21:26:31.807410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
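The request/response pair printed next is rpc.py's dump of the call that just failed. The transport fields mirror the flags passed at tls.sh@34 (name from -b, trtype from -t, traddr from -a, trsvcid from -s, adrfam from -f, subnqn from -n, hostnqn from -q, psk from --psk); the remaining booleans are defaults. code -5 is -EIO, i.e. the "Input/output error" left behind when the handshake with the mismatched key is torn down. Issued by hand against the bdevperf RPC socket, the same failing call is:

  # the call behind the JSON-RPC dump below (fails; exit status non-zero, response code -5)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aDf6QsOJqu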
00:12:58.572 request: 00:12:58.572 { 00:12:58.572 "name": "TLSTEST", 00:12:58.572 "trtype": "tcp", 00:12:58.572 "traddr": "10.0.0.2", 00:12:58.572 "adrfam": "ipv4", 00:12:58.572 "trsvcid": "4420", 00:12:58.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.572 "prchk_reftag": false, 00:12:58.572 "prchk_guard": false, 00:12:58.572 "hdgst": false, 00:12:58.572 "ddgst": false, 00:12:58.572 "psk": "/tmp/tmp.aDf6QsOJqu", 00:12:58.572 "method": "bdev_nvme_attach_controller", 00:12:58.572 "req_id": 1 00:12:58.572 } 00:12:58.572 Got JSON-RPC error response 00:12:58.572 response: 00:12:58.572 { 00:12:58.572 "code": -5, 00:12:58.572 "message": "Input/output error" 00:12:58.572 } 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72646 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72646 ']' 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72646 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72646 00:12:58.572 killing process with pid 72646 00:12:58.572 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.572 00:12:58.572 Latency(us) 00:12:58.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.572 =================================================================================================================== 00:12:58.572 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72646' 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72646 00:12:58.572 [2024-07-15 21:26:31.852434] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:58.572 21:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72646 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ua2dEy1srG 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ua2dEy1srG 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ua2dEy1srG 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ua2dEy1srG' 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72674 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72674 /var/tmp/bdevperf.sock 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72674 ']' 00:12:58.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.829 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.829 [2024-07-15 21:26:32.074560] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:12:58.829 [2024-07-15 21:26:32.074620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:12:59.087 [2024-07-15 21:26:32.209240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.087 [2024-07-15 21:26:32.286272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.087 [2024-07-15 21:26:32.327608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.652 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.652 21:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:59.652 21:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ua2dEy1srG 00:12:59.910 [2024-07-15 21:26:33.136603] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:59.910 [2024-07-15 21:26:33.136704] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:59.910 [2024-07-15 21:26:33.141733] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:59.910 [2024-07-15 21:26:33.141771] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:59.911 [2024-07-15 21:26:33.141816] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:59.911 [2024-07-15 21:26:33.142757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b51f0 (107): Transport endpoint is not connected 00:12:59.911 [2024-07-15 21:26:33.143745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b51f0 (9): Bad file descriptor 00:12:59.911 [2024-07-15 21:26:33.144742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:59.911 [2024-07-15 21:26:33.144763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:59.911 [2024-07-15 21:26:33.144775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
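Unlike the previous case, the key here (/tmp/tmp.ua2dEy1srG) is the right one; what is wrong is the host NQN. The target builds the TLS PSK identity "NVMe0R01 <hostnqn> <subnqn>" and looks it up among the hosts added with nvmf_subsystem_add_host; only host1 was registered for cnode1, so the lookup for host2 fails (the tcp.c:881 / posix.c:528 errors above), the handshake is aborted, and the initiator again reports the -5 Input/output error dumped below. The next case, tls.sh@152, fails the same lookup from the other side by keeping host1 but pointing at the non-configured subsystem cnode2. Purely as an illustration of what is being tested (the script deliberately does not do this), registering host2 with the same key would be expected to let this attach succeed:

  # hypothetical -- NOT part of the test: satisfy the "NVMe0R01 ...host2 ...cnode1" identity lookup
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ua2dEy1srG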
00:12:59.911 request: 00:12:59.911 { 00:12:59.911 "name": "TLSTEST", 00:12:59.911 "trtype": "tcp", 00:12:59.911 "traddr": "10.0.0.2", 00:12:59.911 "adrfam": "ipv4", 00:12:59.911 "trsvcid": "4420", 00:12:59.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:59.911 "prchk_reftag": false, 00:12:59.911 "prchk_guard": false, 00:12:59.911 "hdgst": false, 00:12:59.911 "ddgst": false, 00:12:59.911 "psk": "/tmp/tmp.ua2dEy1srG", 00:12:59.911 "method": "bdev_nvme_attach_controller", 00:12:59.911 "req_id": 1 00:12:59.911 } 00:12:59.911 Got JSON-RPC error response 00:12:59.911 response: 00:12:59.911 { 00:12:59.911 "code": -5, 00:12:59.911 "message": "Input/output error" 00:12:59.911 } 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72674 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72674 ']' 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72674 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72674 00:12:59.911 killing process with pid 72674 00:12:59.911 Received shutdown signal, test time was about 10.000000 seconds 00:12:59.911 00:12:59.911 Latency(us) 00:12:59.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.911 =================================================================================================================== 00:12:59.911 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72674' 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72674 00:12:59.911 [2024-07-15 21:26:33.190975] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:59.911 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72674 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ua2dEy1srG 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ua2dEy1srG 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ua2dEy1srG 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ua2dEy1srG' 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72701 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72701 /var/tmp/bdevperf.sock 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72701 ']' 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.168 21:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.168 [2024-07-15 21:26:33.427417] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:00.168 [2024-07-15 21:26:33.427511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72701 ] 00:13:00.427 [2024-07-15 21:26:33.567038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.427 [2024-07-15 21:26:33.653587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.427 [2024-07-15 21:26:33.694717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:00.994 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.994 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:00.994 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ua2dEy1srG 00:13:01.253 [2024-07-15 21:26:34.432042] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:01.253 [2024-07-15 21:26:34.432301] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:01.253 [2024-07-15 21:26:34.439764] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:01.253 [2024-07-15 21:26:34.439801] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:01.253 [2024-07-15 21:26:34.439862] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:01.253 [2024-07-15 21:26:34.440505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15501f0 (107): Transport endpoint is not connected 00:13:01.253 [2024-07-15 21:26:34.441493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15501f0 (9): Bad file descriptor 00:13:01.253 [2024-07-15 21:26:34.442489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:01.253 [2024-07-15 21:26:34.442508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:01.254 [2024-07-15 21:26:34.442520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:01.254 request: 00:13:01.254 { 00:13:01.254 "name": "TLSTEST", 00:13:01.254 "trtype": "tcp", 00:13:01.254 "traddr": "10.0.0.2", 00:13:01.254 "adrfam": "ipv4", 00:13:01.254 "trsvcid": "4420", 00:13:01.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:01.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:01.254 "prchk_reftag": false, 00:13:01.254 "prchk_guard": false, 00:13:01.254 "hdgst": false, 00:13:01.254 "ddgst": false, 00:13:01.254 "psk": "/tmp/tmp.ua2dEy1srG", 00:13:01.254 "method": "bdev_nvme_attach_controller", 00:13:01.254 "req_id": 1 00:13:01.254 } 00:13:01.254 Got JSON-RPC error response 00:13:01.254 response: 00:13:01.254 { 00:13:01.254 "code": -5, 00:13:01.254 "message": "Input/output error" 00:13:01.254 } 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72701 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72701 ']' 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72701 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72701 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72701' 00:13:01.254 killing process with pid 72701 00:13:01.254 Received shutdown signal, test time was about 10.000000 seconds 00:13:01.254 00:13:01.254 Latency(us) 00:13:01.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.254 =================================================================================================================== 00:13:01.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72701 00:13:01.254 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72701 00:13:01.254 [2024-07-15 21:26:34.496490] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72723 00:13:01.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72723 /var/tmp/bdevperf.sock 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72723 ']' 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.513 21:26:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 [2024-07-15 21:26:34.729831] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:01.513 [2024-07-15 21:26:34.730411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72723 ] 00:13:01.513 [2024-07-15 21:26:34.870587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.771 [2024-07-15 21:26:34.947863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.771 [2024-07-15 21:26:34.988989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:02.335 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.335 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:02.335 21:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:02.619 [2024-07-15 21:26:35.747790] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:02.619 [2024-07-15 21:26:35.749638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe08c00 (9): Bad file descriptor 00:13:02.619 [2024-07-15 21:26:35.750634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:02.619 [2024-07-15 21:26:35.750761] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:02.619 [2024-07-15 21:26:35.750848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
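The last negative case drops the credential altogether: psk was passed as an empty string at tls.sh@155, so the attach above carries no --psk at all, while the listener for cnode1 was created with -k, the TLS/secure-channel flag seen at tls.sh@53. The plain TCP connection is therefore torn down during setup (the "Bad file descriptor" / "Transport endpoint is not connected" errors above) and the RPC once more returns -5, dumped next. Only the attach call differs from the passing runs; compare (full rpc.py path as in the trace omitted):

  # attach used by the passing tests at tls.sh@34: PSK supplied and registered for this host/subsystem
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ua2dEy1srG
  # attach used here: identical arguments but no --psk, expected to fail against the -k listener
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1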
00:13:02.619 request: 00:13:02.619 { 00:13:02.619 "name": "TLSTEST", 00:13:02.619 "trtype": "tcp", 00:13:02.619 "traddr": "10.0.0.2", 00:13:02.619 "adrfam": "ipv4", 00:13:02.619 "trsvcid": "4420", 00:13:02.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.619 "prchk_reftag": false, 00:13:02.619 "prchk_guard": false, 00:13:02.619 "hdgst": false, 00:13:02.619 "ddgst": false, 00:13:02.619 "method": "bdev_nvme_attach_controller", 00:13:02.619 "req_id": 1 00:13:02.619 } 00:13:02.619 Got JSON-RPC error response 00:13:02.619 response: 00:13:02.619 { 00:13:02.619 "code": -5, 00:13:02.619 "message": "Input/output error" 00:13:02.619 } 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72723 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72723 ']' 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72723 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72723 00:13:02.619 killing process with pid 72723 00:13:02.619 Received shutdown signal, test time was about 10.000000 seconds 00:13:02.619 00:13:02.619 Latency(us) 00:13:02.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.619 =================================================================================================================== 00:13:02.619 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72723' 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72723 00:13:02.619 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72723 00:13:02.877 21:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:02.877 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:02.877 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:02.877 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72303 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72303 ']' 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72303 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.878 21:26:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72303 00:13:02.878 killing process with pid 72303 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
72303' 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72303 00:13:02.878 [2024-07-15 21:26:36.029534] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72303 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:02.878 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.vFu5WQSIs8 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.vFu5WQSIs8 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72761 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72761 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72761 ']' 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.135 21:26:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.135 [2024-07-15 21:26:36.339469] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:03.135 [2024-07-15 21:26:36.339526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.135 [2024-07-15 21:26:36.481509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.393 [2024-07-15 21:26:36.573500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.393 [2024-07-15 21:26:36.573666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.393 [2024-07-15 21:26:36.573752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.393 [2024-07-15 21:26:36.573797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.393 [2024-07-15 21:26:36.573832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.393 [2024-07-15 21:26:36.573888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.393 [2024-07-15 21:26:36.614716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vFu5WQSIs8 00:13:03.959 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:04.216 [2024-07-15 21:26:37.421615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.216 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:04.475 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:04.733 [2024-07-15 21:26:37.876960] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:04.733 [2024-07-15 21:26:37.877158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.733 21:26:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:04.733 malloc0 00:13:04.733 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:04.991 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:05.250 
[2024-07-15 21:26:38.432874] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFu5WQSIs8 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vFu5WQSIs8' 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72810 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72810 /var/tmp/bdevperf.sock 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72810 ']' 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:05.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.250 21:26:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:05.251 [2024-07-15 21:26:38.501682] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
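Once the bdevperf instance below is up on /var/tmp/bdevperf.sock, the initiator side is exercised with two calls, both of which appear verbatim in the trace that follows: attach a TLS-protected controller using the same PSK file, then drive the 10-second verify workload through bdevperf's RPC helper:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests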
00:13:05.251 [2024-07-15 21:26:38.501919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72810 ] 00:13:05.510 [2024-07-15 21:26:38.641997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.510 [2024-07-15 21:26:38.738773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.510 [2024-07-15 21:26:38.780261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:06.076 21:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.076 21:26:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:06.076 21:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:06.335 [2024-07-15 21:26:39.506937] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:06.335 [2024-07-15 21:26:39.507035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:06.335 TLSTESTn1 00:13:06.335 21:26:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:06.335 Running I/O for 10 seconds... 00:13:16.346 00:13:16.346 Latency(us) 00:13:16.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.346 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:16.346 Verification LBA range: start 0x0 length 0x2000 00:13:16.346 TLSTESTn1 : 10.01 5705.54 22.29 0.00 0.00 22398.48 5211.30 18002.66 00:13:16.346 =================================================================================================================== 00:13:16.346 Total : 5705.54 22.29 0.00 0.00 22398.48 5211.30 18002.66 00:13:16.346 0 00:13:16.346 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.346 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 72810 00:13:16.346 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72810 ']' 00:13:16.346 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72810 00:13:16.346 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72810 00:13:16.605 killing process with pid 72810 00:13:16.605 Received shutdown signal, test time was about 10.000000 seconds 00:13:16.605 00:13:16.605 Latency(us) 00:13:16.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.605 =================================================================================================================== 00:13:16.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72810' 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72810 00:13:16.605 [2024-07-15 21:26:49.748082] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72810 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.vFu5WQSIs8 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFu5WQSIs8 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFu5WQSIs8 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFu5WQSIs8 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:16.605 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.vFu5WQSIs8' 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72943 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72943 /var/tmp/bdevperf.sock 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72943 ']' 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.606 21:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:16.865 [2024-07-15 21:26:49.991690] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:16.865 [2024-07-15 21:26:49.991756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:13:16.865 [2024-07-15 21:26:50.132537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.865 [2024-07-15 21:26:50.221078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.123 [2024-07-15 21:26:50.262183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.690 21:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.690 21:26:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:17.690 21:26:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:17.949 [2024-07-15 21:26:51.063326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:17.949 [2024-07-15 21:26:51.063388] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:17.949 [2024-07-15 21:26:51.063397] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.vFu5WQSIs8 00:13:17.949 request: 00:13:17.949 { 00:13:17.949 "name": "TLSTEST", 00:13:17.949 "trtype": "tcp", 00:13:17.949 "traddr": "10.0.0.2", 00:13:17.949 "adrfam": "ipv4", 00:13:17.949 "trsvcid": "4420", 00:13:17.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.949 "prchk_reftag": false, 00:13:17.949 "prchk_guard": false, 00:13:17.949 "hdgst": false, 00:13:17.949 "ddgst": false, 00:13:17.949 "psk": "/tmp/tmp.vFu5WQSIs8", 00:13:17.949 "method": "bdev_nvme_attach_controller", 00:13:17.949 "req_id": 1 00:13:17.949 } 00:13:17.949 Got JSON-RPC error response 00:13:17.949 response: 00:13:17.949 { 00:13:17.949 "code": -1, 00:13:17.949 "message": "Operation not permitted" 00:13:17.949 } 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 72943 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72943 ']' 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72943 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72943 00:13:17.949 killing process with pid 72943 00:13:17.949 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.949 00:13:17.949 Latency(us) 00:13:17.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.949 =================================================================================================================== 00:13:17.949 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 72943' 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72943 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72943 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 72761 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72761 ']' 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72761 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:17.949 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72761 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72761' 00:13:18.209 killing process with pid 72761 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72761 00:13:18.209 [2024-07-15 21:26:51.347441] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72761 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72977 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72977 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 72977 ']' 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.209 21:26:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.471 [2024-07-15 21:26:51.602717] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
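The es=1 / (( !es == 0 )) evaluation above is autotest_common.sh's NOT wrapper confirming that run_bdevperf failed, which is the expected outcome once the key file has been opened up to 0666. A simplified sketch of that pattern (not the exact helper):

NOT() {
    # succeed only when the wrapped command fails; simplified stand-in for the
    # NOT helper traced above
    if "$@"; then
        return 1
    fi
    return 0
}
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFu5WQSIs8
# passes here precisely because bdev_nvme_attach_controller rejects the world-readable key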
00:13:18.471 [2024-07-15 21:26:51.603185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.471 [2024-07-15 21:26:51.745505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.471 [2024-07-15 21:26:51.828865] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.471 [2024-07-15 21:26:51.828909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.471 [2024-07-15 21:26:51.828919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.471 [2024-07-15 21:26:51.828927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.471 [2024-07-15 21:26:51.828934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.471 [2024-07-15 21:26:51.828962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.730 [2024-07-15 21:26:51.869504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vFu5WQSIs8 00:13:19.298 21:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:19.558 [2024-07-15 21:26:52.690759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.558 21:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:19.558 21:26:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:19.942 [2024-07-15 21:26:53.046228] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:19.942 [2024-07-15 21:26:53.046413] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.942 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:19.942 malloc0 00:13:19.942 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:20.200 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:20.459 [2024-07-15 21:26:53.586324] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:20.459 [2024-07-15 21:26:53.586367] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:20.459 [2024-07-15 21:26:53.586401] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:20.459 request: 00:13:20.459 { 00:13:20.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.459 "host": "nqn.2016-06.io.spdk:host1", 00:13:20.459 "psk": "/tmp/tmp.vFu5WQSIs8", 00:13:20.459 "method": "nvmf_subsystem_add_host", 00:13:20.459 "req_id": 1 00:13:20.459 } 00:13:20.459 Got JSON-RPC error response 00:13:20.459 response: 00:13:20.459 { 00:13:20.459 "code": -32603, 00:13:20.459 "message": "Internal error" 00:13:20.459 } 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 72977 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 72977 ']' 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 72977 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72977 00:13:20.459 killing process with pid 72977 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72977' 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 72977 00:13:20.459 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 72977 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.vFu5WQSIs8 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73034 00:13:20.718 
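Both the initiator-side and the target-side failures above come down to file permissions: a PSK readable by group or other is rejected ("Incorrect permissions for PSK file"), and nvmf_subsystem_add_host surfaces that as JSON-RPC error -32603. Restoring the expected mode before re-registering the host is all that is needed, as the chmod just above does; for example:

chmod 0600 /tmp/tmp.vFu5WQSIs8
stat -c '%a' /tmp/tmp.vFu5WQSIs8   # expect 600 before retrying the call below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8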
21:26:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73034 00:13:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73034 ']' 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.718 21:26:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.718 [2024-07-15 21:26:53.908994] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:20.718 [2024-07-15 21:26:53.909184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.718 [2024-07-15 21:26:54.051530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.977 [2024-07-15 21:26:54.138108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.977 [2024-07-15 21:26:54.138318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.977 [2024-07-15 21:26:54.138334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.977 [2024-07-15 21:26:54.138342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.977 [2024-07-15 21:26:54.138349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
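The app_setup_trace notices above apply to every nvmf_tgt started with -e 0xFFFF in this run; a tracepoint snapshot can be pulled while the target is still alive, along the lines the banner itself suggests (binary location assumed, the banner only names the tool):

spdk_trace -s nvmf -i 0            # or ./build/bin/spdk_trace from the repo root
cp /dev/shm/nvmf_trace.0 /tmp/     # keep the shm file for offline analysis instead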
00:13:20.977 [2024-07-15 21:26:54.138378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.977 [2024-07-15 21:26:54.180369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vFu5WQSIs8 00:13:21.546 21:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:21.805 [2024-07-15 21:26:54.966769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.805 21:26:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:22.064 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:22.064 [2024-07-15 21:26:55.366187] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:22.064 [2024-07-15 21:26:55.366373] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.064 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:22.322 malloc0 00:13:22.322 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:22.581 [2024-07-15 21:26:55.906347] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73083 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73083 /var/tmp/bdevperf.sock 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73083 ']' 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.581 21:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.840 [2024-07-15 21:26:55.973543] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:22.840 [2024-07-15 21:26:55.973748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:13:22.840 [2024-07-15 21:26:56.113703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.840 [2024-07-15 21:26:56.195782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.099 [2024-07-15 21:26:56.237979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.667 21:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.667 21:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:23.667 21:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:23.667 [2024-07-15 21:26:56.939422] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.667 [2024-07-15 21:26:56.939700] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:23.667 TLSTESTn1 00:13:23.667 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:24.234 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:24.234 "subsystems": [ 00:13:24.234 { 00:13:24.234 "subsystem": "keyring", 00:13:24.234 "config": [] 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "subsystem": "iobuf", 00:13:24.234 "config": [ 00:13:24.234 { 00:13:24.234 "method": "iobuf_set_options", 00:13:24.234 "params": { 00:13:24.234 "small_pool_count": 8192, 00:13:24.234 "large_pool_count": 1024, 00:13:24.234 "small_bufsize": 8192, 00:13:24.234 "large_bufsize": 135168 00:13:24.234 } 00:13:24.234 } 00:13:24.234 ] 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "subsystem": "sock", 00:13:24.234 "config": [ 00:13:24.234 { 00:13:24.234 "method": "sock_set_default_impl", 00:13:24.234 "params": { 00:13:24.234 "impl_name": "uring" 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "sock_impl_set_options", 00:13:24.234 "params": { 00:13:24.234 "impl_name": "ssl", 00:13:24.234 "recv_buf_size": 4096, 00:13:24.234 "send_buf_size": 4096, 00:13:24.234 "enable_recv_pipe": true, 00:13:24.234 "enable_quickack": false, 00:13:24.234 "enable_placement_id": 0, 00:13:24.234 "enable_zerocopy_send_server": true, 00:13:24.234 "enable_zerocopy_send_client": false, 00:13:24.234 "zerocopy_threshold": 0, 00:13:24.234 "tls_version": 0, 00:13:24.234 "enable_ktls": false 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "sock_impl_set_options", 00:13:24.234 "params": { 00:13:24.234 "impl_name": "posix", 00:13:24.234 "recv_buf_size": 2097152, 
00:13:24.234 "send_buf_size": 2097152, 00:13:24.234 "enable_recv_pipe": true, 00:13:24.234 "enable_quickack": false, 00:13:24.234 "enable_placement_id": 0, 00:13:24.234 "enable_zerocopy_send_server": true, 00:13:24.234 "enable_zerocopy_send_client": false, 00:13:24.234 "zerocopy_threshold": 0, 00:13:24.234 "tls_version": 0, 00:13:24.234 "enable_ktls": false 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "sock_impl_set_options", 00:13:24.234 "params": { 00:13:24.234 "impl_name": "uring", 00:13:24.234 "recv_buf_size": 2097152, 00:13:24.234 "send_buf_size": 2097152, 00:13:24.234 "enable_recv_pipe": true, 00:13:24.234 "enable_quickack": false, 00:13:24.234 "enable_placement_id": 0, 00:13:24.234 "enable_zerocopy_send_server": false, 00:13:24.234 "enable_zerocopy_send_client": false, 00:13:24.234 "zerocopy_threshold": 0, 00:13:24.234 "tls_version": 0, 00:13:24.234 "enable_ktls": false 00:13:24.234 } 00:13:24.234 } 00:13:24.234 ] 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "subsystem": "vmd", 00:13:24.234 "config": [] 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "subsystem": "accel", 00:13:24.234 "config": [ 00:13:24.234 { 00:13:24.234 "method": "accel_set_options", 00:13:24.234 "params": { 00:13:24.234 "small_cache_size": 128, 00:13:24.234 "large_cache_size": 16, 00:13:24.234 "task_count": 2048, 00:13:24.234 "sequence_count": 2048, 00:13:24.234 "buf_count": 2048 00:13:24.234 } 00:13:24.234 } 00:13:24.234 ] 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "subsystem": "bdev", 00:13:24.234 "config": [ 00:13:24.234 { 00:13:24.234 "method": "bdev_set_options", 00:13:24.234 "params": { 00:13:24.234 "bdev_io_pool_size": 65535, 00:13:24.234 "bdev_io_cache_size": 256, 00:13:24.234 "bdev_auto_examine": true, 00:13:24.234 "iobuf_small_cache_size": 128, 00:13:24.234 "iobuf_large_cache_size": 16 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "bdev_raid_set_options", 00:13:24.234 "params": { 00:13:24.234 "process_window_size_kb": 1024 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "bdev_iscsi_set_options", 00:13:24.234 "params": { 00:13:24.234 "timeout_sec": 30 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "bdev_nvme_set_options", 00:13:24.234 "params": { 00:13:24.234 "action_on_timeout": "none", 00:13:24.234 "timeout_us": 0, 00:13:24.234 "timeout_admin_us": 0, 00:13:24.234 "keep_alive_timeout_ms": 10000, 00:13:24.234 "arbitration_burst": 0, 00:13:24.234 "low_priority_weight": 0, 00:13:24.234 "medium_priority_weight": 0, 00:13:24.234 "high_priority_weight": 0, 00:13:24.234 "nvme_adminq_poll_period_us": 10000, 00:13:24.234 "nvme_ioq_poll_period_us": 0, 00:13:24.234 "io_queue_requests": 0, 00:13:24.234 "delay_cmd_submit": true, 00:13:24.234 "transport_retry_count": 4, 00:13:24.234 "bdev_retry_count": 3, 00:13:24.234 "transport_ack_timeout": 0, 00:13:24.234 "ctrlr_loss_timeout_sec": 0, 00:13:24.234 "reconnect_delay_sec": 0, 00:13:24.234 "fast_io_fail_timeout_sec": 0, 00:13:24.234 "disable_auto_failback": false, 00:13:24.234 "generate_uuids": false, 00:13:24.234 "transport_tos": 0, 00:13:24.234 "nvme_error_stat": false, 00:13:24.234 "rdma_srq_size": 0, 00:13:24.234 "io_path_stat": false, 00:13:24.234 "allow_accel_sequence": false, 00:13:24.234 "rdma_max_cq_size": 0, 00:13:24.234 "rdma_cm_event_timeout_ms": 0, 00:13:24.234 "dhchap_digests": [ 00:13:24.234 "sha256", 00:13:24.234 "sha384", 00:13:24.234 "sha512" 00:13:24.234 ], 00:13:24.234 "dhchap_dhgroups": [ 00:13:24.234 "null", 00:13:24.234 "ffdhe2048", 00:13:24.234 "ffdhe3072", 
00:13:24.234 "ffdhe4096", 00:13:24.234 "ffdhe6144", 00:13:24.234 "ffdhe8192" 00:13:24.234 ] 00:13:24.234 } 00:13:24.234 }, 00:13:24.234 { 00:13:24.234 "method": "bdev_nvme_set_hotplug", 00:13:24.235 "params": { 00:13:24.235 "period_us": 100000, 00:13:24.235 "enable": false 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "bdev_malloc_create", 00:13:24.235 "params": { 00:13:24.235 "name": "malloc0", 00:13:24.235 "num_blocks": 8192, 00:13:24.235 "block_size": 4096, 00:13:24.235 "physical_block_size": 4096, 00:13:24.235 "uuid": "93da6886-e55d-493c-8570-e6e37757357a", 00:13:24.235 "optimal_io_boundary": 0 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "bdev_wait_for_examine" 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "nbd", 00:13:24.235 "config": [] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "scheduler", 00:13:24.235 "config": [ 00:13:24.235 { 00:13:24.235 "method": "framework_set_scheduler", 00:13:24.235 "params": { 00:13:24.235 "name": "static" 00:13:24.235 } 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "nvmf", 00:13:24.235 "config": [ 00:13:24.235 { 00:13:24.235 "method": "nvmf_set_config", 00:13:24.235 "params": { 00:13:24.235 "discovery_filter": "match_any", 00:13:24.235 "admin_cmd_passthru": { 00:13:24.235 "identify_ctrlr": false 00:13:24.235 } 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_set_max_subsystems", 00:13:24.235 "params": { 00:13:24.235 "max_subsystems": 1024 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_set_crdt", 00:13:24.235 "params": { 00:13:24.235 "crdt1": 0, 00:13:24.235 "crdt2": 0, 00:13:24.235 "crdt3": 0 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_create_transport", 00:13:24.235 "params": { 00:13:24.235 "trtype": "TCP", 00:13:24.235 "max_queue_depth": 128, 00:13:24.235 "max_io_qpairs_per_ctrlr": 127, 00:13:24.235 "in_capsule_data_size": 4096, 00:13:24.235 "max_io_size": 131072, 00:13:24.235 "io_unit_size": 131072, 00:13:24.235 "max_aq_depth": 128, 00:13:24.235 "num_shared_buffers": 511, 00:13:24.235 "buf_cache_size": 4294967295, 00:13:24.235 "dif_insert_or_strip": false, 00:13:24.235 "zcopy": false, 00:13:24.235 "c2h_success": false, 00:13:24.235 "sock_priority": 0, 00:13:24.235 "abort_timeout_sec": 1, 00:13:24.235 "ack_timeout": 0, 00:13:24.235 "data_wr_pool_size": 0 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_create_subsystem", 00:13:24.235 "params": { 00:13:24.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.235 "allow_any_host": false, 00:13:24.235 "serial_number": "SPDK00000000000001", 00:13:24.235 "model_number": "SPDK bdev Controller", 00:13:24.235 "max_namespaces": 10, 00:13:24.235 "min_cntlid": 1, 00:13:24.235 "max_cntlid": 65519, 00:13:24.235 "ana_reporting": false 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_subsystem_add_host", 00:13:24.235 "params": { 00:13:24.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.235 "host": "nqn.2016-06.io.spdk:host1", 00:13:24.235 "psk": "/tmp/tmp.vFu5WQSIs8" 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_subsystem_add_ns", 00:13:24.235 "params": { 00:13:24.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.235 "namespace": { 00:13:24.235 "nsid": 1, 00:13:24.235 "bdev_name": "malloc0", 00:13:24.235 "nguid": "93DA6886E55D493C8570E6E37757357A", 00:13:24.235 "uuid": "93da6886-e55d-493c-8570-e6e37757357a", 
00:13:24.235 "no_auto_visible": false 00:13:24.235 } 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "nvmf_subsystem_add_listener", 00:13:24.235 "params": { 00:13:24.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.235 "listen_address": { 00:13:24.235 "trtype": "TCP", 00:13:24.235 "adrfam": "IPv4", 00:13:24.235 "traddr": "10.0.0.2", 00:13:24.235 "trsvcid": "4420" 00:13:24.235 }, 00:13:24.235 "secure_channel": true 00:13:24.235 } 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 }' 00:13:24.235 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:24.235 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:24.235 "subsystems": [ 00:13:24.235 { 00:13:24.235 "subsystem": "keyring", 00:13:24.235 "config": [] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "iobuf", 00:13:24.235 "config": [ 00:13:24.235 { 00:13:24.235 "method": "iobuf_set_options", 00:13:24.235 "params": { 00:13:24.235 "small_pool_count": 8192, 00:13:24.235 "large_pool_count": 1024, 00:13:24.235 "small_bufsize": 8192, 00:13:24.235 "large_bufsize": 135168 00:13:24.235 } 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "sock", 00:13:24.235 "config": [ 00:13:24.235 { 00:13:24.235 "method": "sock_set_default_impl", 00:13:24.235 "params": { 00:13:24.235 "impl_name": "uring" 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "sock_impl_set_options", 00:13:24.235 "params": { 00:13:24.235 "impl_name": "ssl", 00:13:24.235 "recv_buf_size": 4096, 00:13:24.235 "send_buf_size": 4096, 00:13:24.235 "enable_recv_pipe": true, 00:13:24.235 "enable_quickack": false, 00:13:24.235 "enable_placement_id": 0, 00:13:24.235 "enable_zerocopy_send_server": true, 00:13:24.235 "enable_zerocopy_send_client": false, 00:13:24.235 "zerocopy_threshold": 0, 00:13:24.235 "tls_version": 0, 00:13:24.235 "enable_ktls": false 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "sock_impl_set_options", 00:13:24.235 "params": { 00:13:24.235 "impl_name": "posix", 00:13:24.235 "recv_buf_size": 2097152, 00:13:24.235 "send_buf_size": 2097152, 00:13:24.235 "enable_recv_pipe": true, 00:13:24.235 "enable_quickack": false, 00:13:24.235 "enable_placement_id": 0, 00:13:24.235 "enable_zerocopy_send_server": true, 00:13:24.235 "enable_zerocopy_send_client": false, 00:13:24.235 "zerocopy_threshold": 0, 00:13:24.235 "tls_version": 0, 00:13:24.235 "enable_ktls": false 00:13:24.235 } 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "method": "sock_impl_set_options", 00:13:24.235 "params": { 00:13:24.235 "impl_name": "uring", 00:13:24.235 "recv_buf_size": 2097152, 00:13:24.235 "send_buf_size": 2097152, 00:13:24.235 "enable_recv_pipe": true, 00:13:24.235 "enable_quickack": false, 00:13:24.235 "enable_placement_id": 0, 00:13:24.235 "enable_zerocopy_send_server": false, 00:13:24.235 "enable_zerocopy_send_client": false, 00:13:24.235 "zerocopy_threshold": 0, 00:13:24.235 "tls_version": 0, 00:13:24.235 "enable_ktls": false 00:13:24.235 } 00:13:24.235 } 00:13:24.235 ] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "vmd", 00:13:24.235 "config": [] 00:13:24.235 }, 00:13:24.235 { 00:13:24.235 "subsystem": "accel", 00:13:24.235 "config": [ 00:13:24.235 { 00:13:24.235 "method": "accel_set_options", 00:13:24.235 "params": { 00:13:24.235 "small_cache_size": 128, 00:13:24.235 "large_cache_size": 16, 00:13:24.235 "task_count": 2048, 00:13:24.235 "sequence_count": 
2048, 00:13:24.235 "buf_count": 2048 00:13:24.236 } 00:13:24.236 } 00:13:24.236 ] 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "subsystem": "bdev", 00:13:24.236 "config": [ 00:13:24.236 { 00:13:24.236 "method": "bdev_set_options", 00:13:24.236 "params": { 00:13:24.236 "bdev_io_pool_size": 65535, 00:13:24.236 "bdev_io_cache_size": 256, 00:13:24.236 "bdev_auto_examine": true, 00:13:24.236 "iobuf_small_cache_size": 128, 00:13:24.236 "iobuf_large_cache_size": 16 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_raid_set_options", 00:13:24.236 "params": { 00:13:24.236 "process_window_size_kb": 1024 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_iscsi_set_options", 00:13:24.236 "params": { 00:13:24.236 "timeout_sec": 30 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_nvme_set_options", 00:13:24.236 "params": { 00:13:24.236 "action_on_timeout": "none", 00:13:24.236 "timeout_us": 0, 00:13:24.236 "timeout_admin_us": 0, 00:13:24.236 "keep_alive_timeout_ms": 10000, 00:13:24.236 "arbitration_burst": 0, 00:13:24.236 "low_priority_weight": 0, 00:13:24.236 "medium_priority_weight": 0, 00:13:24.236 "high_priority_weight": 0, 00:13:24.236 "nvme_adminq_poll_period_us": 10000, 00:13:24.236 "nvme_ioq_poll_period_us": 0, 00:13:24.236 "io_queue_requests": 512, 00:13:24.236 "delay_cmd_submit": true, 00:13:24.236 "transport_retry_count": 4, 00:13:24.236 "bdev_retry_count": 3, 00:13:24.236 "transport_ack_timeout": 0, 00:13:24.236 "ctrlr_loss_timeout_sec": 0, 00:13:24.236 "reconnect_delay_sec": 0, 00:13:24.236 "fast_io_fail_timeout_sec": 0, 00:13:24.236 "disable_auto_failback": false, 00:13:24.236 "generate_uuids": false, 00:13:24.236 "transport_tos": 0, 00:13:24.236 "nvme_error_stat": false, 00:13:24.236 "rdma_srq_size": 0, 00:13:24.236 "io_path_stat": false, 00:13:24.236 "allow_accel_sequence": false, 00:13:24.236 "rdma_max_cq_size": 0, 00:13:24.236 "rdma_cm_event_timeout_ms": 0, 00:13:24.236 "dhchap_digests": [ 00:13:24.236 "sha256", 00:13:24.236 "sha384", 00:13:24.236 "sha512" 00:13:24.236 ], 00:13:24.236 "dhchap_dhgroups": [ 00:13:24.236 "null", 00:13:24.236 "ffdhe2048", 00:13:24.236 "ffdhe3072", 00:13:24.236 "ffdhe4096", 00:13:24.236 "ffdhe6144", 00:13:24.236 "ffdhe8192" 00:13:24.236 ] 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_nvme_attach_controller", 00:13:24.236 "params": { 00:13:24.236 "name": "TLSTEST", 00:13:24.236 "trtype": "TCP", 00:13:24.236 "adrfam": "IPv4", 00:13:24.236 "traddr": "10.0.0.2", 00:13:24.236 "trsvcid": "4420", 00:13:24.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.236 "prchk_reftag": false, 00:13:24.236 "prchk_guard": false, 00:13:24.236 "ctrlr_loss_timeout_sec": 0, 00:13:24.236 "reconnect_delay_sec": 0, 00:13:24.236 "fast_io_fail_timeout_sec": 0, 00:13:24.236 "psk": "/tmp/tmp.vFu5WQSIs8", 00:13:24.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.236 "hdgst": false, 00:13:24.236 "ddgst": false 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_nvme_set_hotplug", 00:13:24.236 "params": { 00:13:24.236 "period_us": 100000, 00:13:24.236 "enable": false 00:13:24.236 } 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "method": "bdev_wait_for_examine" 00:13:24.236 } 00:13:24.236 ] 00:13:24.236 }, 00:13:24.236 { 00:13:24.236 "subsystem": "nbd", 00:13:24.236 "config": [] 00:13:24.236 } 00:13:24.236 ] 00:13:24.236 }' 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73083 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 73083 ']' 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73083 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.236 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73083 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73083' 00:13:24.496 killing process with pid 73083 00:13:24.496 Received shutdown signal, test time was about 10.000000 seconds 00:13:24.496 00:13:24.496 Latency(us) 00:13:24.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.496 =================================================================================================================== 00:13:24.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73083 00:13:24.496 [2024-07-15 21:26:57.624168] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73083 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73034 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73034 ']' 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73034 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73034 00:13:24.496 killing process with pid 73034 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73034' 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73034 00:13:24.496 [2024-07-15 21:26:57.843725] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:24.496 21:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73034 00:13:24.755 21:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:24.755 21:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.755 21:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:24.755 "subsystems": [ 00:13:24.755 { 00:13:24.756 "subsystem": "keyring", 00:13:24.756 "config": [] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "iobuf", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "iobuf_set_options", 00:13:24.756 "params": { 00:13:24.756 "small_pool_count": 8192, 00:13:24.756 "large_pool_count": 1024, 00:13:24.756 "small_bufsize": 8192, 00:13:24.756 "large_bufsize": 135168 
00:13:24.756 } 00:13:24.756 } 00:13:24.756 ] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "sock", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "sock_set_default_impl", 00:13:24.756 "params": { 00:13:24.756 "impl_name": "uring" 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "sock_impl_set_options", 00:13:24.756 "params": { 00:13:24.756 "impl_name": "ssl", 00:13:24.756 "recv_buf_size": 4096, 00:13:24.756 "send_buf_size": 4096, 00:13:24.756 "enable_recv_pipe": true, 00:13:24.756 "enable_quickack": false, 00:13:24.756 "enable_placement_id": 0, 00:13:24.756 "enable_zerocopy_send_server": true, 00:13:24.756 "enable_zerocopy_send_client": false, 00:13:24.756 "zerocopy_threshold": 0, 00:13:24.756 "tls_version": 0, 00:13:24.756 "enable_ktls": false 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "sock_impl_set_options", 00:13:24.756 "params": { 00:13:24.756 "impl_name": "posix", 00:13:24.756 "recv_buf_size": 2097152, 00:13:24.756 "send_buf_size": 2097152, 00:13:24.756 "enable_recv_pipe": true, 00:13:24.756 "enable_quickack": false, 00:13:24.756 "enable_placement_id": 0, 00:13:24.756 "enable_zerocopy_send_server": true, 00:13:24.756 "enable_zerocopy_send_client": false, 00:13:24.756 "zerocopy_threshold": 0, 00:13:24.756 "tls_version": 0, 00:13:24.756 "enable_ktls": false 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "sock_impl_set_options", 00:13:24.756 "params": { 00:13:24.756 "impl_name": "uring", 00:13:24.756 "recv_buf_size": 2097152, 00:13:24.756 "send_buf_size": 2097152, 00:13:24.756 "enable_recv_pipe": true, 00:13:24.756 "enable_quickack": false, 00:13:24.756 "enable_placement_id": 0, 00:13:24.756 "enable_zerocopy_send_server": false, 00:13:24.756 "enable_zerocopy_send_client": false, 00:13:24.756 "zerocopy_threshold": 0, 00:13:24.756 "tls_version": 0, 00:13:24.756 "enable_ktls": false 00:13:24.756 } 00:13:24.756 } 00:13:24.756 ] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "vmd", 00:13:24.756 "config": [] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "accel", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "accel_set_options", 00:13:24.756 "params": { 00:13:24.756 "small_cache_size": 128, 00:13:24.756 "large_cache_size": 16, 00:13:24.756 "task_count": 2048, 00:13:24.756 "sequence_count": 2048, 00:13:24.756 "buf_count": 2048 00:13:24.756 } 00:13:24.756 } 00:13:24.756 ] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "bdev", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "bdev_set_options", 00:13:24.756 "params": { 00:13:24.756 "bdev_io_pool_size": 65535, 00:13:24.756 "bdev_io_cache_size": 256, 00:13:24.756 "bdev_auto_examine": true, 00:13:24.756 "iobuf_small_cache_size": 128, 00:13:24.756 "iobuf_large_cache_size": 16 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_raid_set_options", 00:13:24.756 "params": { 00:13:24.756 "process_window_size_kb": 1024 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_iscsi_set_options", 00:13:24.756 "params": { 00:13:24.756 "timeout_sec": 30 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_nvme_set_options", 00:13:24.756 "params": { 00:13:24.756 "action_on_timeout": "none", 00:13:24.756 "timeout_us": 0, 00:13:24.756 "timeout_admin_us": 0, 00:13:24.756 "keep_alive_timeout_ms": 10000, 00:13:24.756 "arbitration_burst": 0, 00:13:24.756 "low_priority_weight": 0, 00:13:24.756 "medium_priority_weight": 0, 00:13:24.756 
"high_priority_weight": 0, 00:13:24.756 "nvme_adminq_poll_period_us": 10000, 00:13:24.756 "nvme_ioq_poll_period_us": 0, 00:13:24.756 "io_queue_requests": 0, 00:13:24.756 "delay_cmd_submit": true, 00:13:24.756 "transport_retry_count": 4, 00:13:24.756 "bdev_retry_count": 3, 00:13:24.756 "transport_ack_timeout": 0, 00:13:24.756 "ctrlr_loss_timeout_sec": 0, 00:13:24.756 "reconnect_delay_sec": 0, 00:13:24.756 "fast_io_fail_timeout_sec": 0, 00:13:24.756 "disable_auto_failback": false, 00:13:24.756 "generate_uuids": false, 00:13:24.756 "transport_tos": 0, 00:13:24.756 "nvme_error_stat": false, 00:13:24.756 "rdma_srq_size": 0, 00:13:24.756 "io_path_stat": false, 00:13:24.756 "allow_accel_sequence": false, 00:13:24.756 "rdma_max_cq_size": 0, 00:13:24.756 "rdma_cm_event_timeout_ms": 0, 00:13:24.756 "dhchap_digests": [ 00:13:24.756 "sha256", 00:13:24.756 "sha384", 00:13:24.756 "sha512" 00:13:24.756 ], 00:13:24.756 "dhchap_dhgroups": [ 00:13:24.756 "null", 00:13:24.756 "ffdhe2048", 00:13:24.756 "ffdhe3072", 00:13:24.756 "ffdhe4096", 00:13:24.756 "ffdhe6144", 00:13:24.756 "ffdhe8192" 00:13:24.756 ] 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_nvme_set_hotplug", 00:13:24.756 "params": { 00:13:24.756 "period_us": 100000, 00:13:24.756 "enable": false 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_malloc_create", 00:13:24.756 "params": { 00:13:24.756 "name": "malloc0", 00:13:24.756 "num_blocks": 8192, 00:13:24.756 "block_size": 4096, 00:13:24.756 "physical_block_size": 4096, 00:13:24.756 "uuid": "93da6886-e55d-493c-8570-e6e37757357a", 00:13:24.756 "optimal_io_boundary": 0 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "bdev_wait_for_examine" 00:13:24.756 } 00:13:24.756 ] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "nbd", 00:13:24.756 "config": [] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "scheduler", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "framework_set_scheduler", 00:13:24.756 "params": { 00:13:24.756 "name": "static" 00:13:24.756 } 00:13:24.756 } 00:13:24.756 ] 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "subsystem": "nvmf", 00:13:24.756 "config": [ 00:13:24.756 { 00:13:24.756 "method": "nvmf_set_config", 00:13:24.756 "params": { 00:13:24.756 "discovery_filter": "match_any", 00:13:24.756 "admin_cmd_passthru": { 00:13:24.756 "identify_ctrlr": false 00:13:24.756 } 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "nvmf_set_max_subsystems", 00:13:24.756 "params": { 00:13:24.756 "max_subsystems": 1024 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "nvmf_set_crdt", 00:13:24.756 "params": { 00:13:24.756 "crdt1": 0, 00:13:24.756 "crdt2": 0, 00:13:24.756 "crdt3": 0 00:13:24.756 } 00:13:24.756 }, 00:13:24.756 { 00:13:24.756 "method": "nvmf_create_transport", 00:13:24.756 "params": { 00:13:24.756 "trtype": "TCP", 00:13:24.756 "max_queue_depth": 128, 00:13:24.756 "max_io_qpairs_per_ctrlr": 127, 00:13:24.756 "in_capsule_data_size": 4096, 00:13:24.756 "max_io_size": 131072, 00:13:24.756 "io_unit_size": 131072, 00:13:24.756 "max_aq_depth": 128, 00:13:24.756 "num_shared_buffers": 511, 00:13:24.756 "buf_cache_size": 4294967295, 00:13:24.756 "dif_insert_or_strip": false, 00:13:24.757 "zcopy": false, 00:13:24.757 "c2h_success": false, 00:13:24.757 "sock_priority": 0, 00:13:24.757 "abort_timeout_sec": 1, 00:13:24.757 "ack_timeout": 0, 00:13:24.757 "data_wr_pool_size": 0 00:13:24.757 } 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "method": 
"nvmf_create_subsystem", 00:13:24.757 "params": { 00:13:24.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.757 "allow_any_host": false, 00:13:24.757 "serial_number": "SPDK00000000000001", 00:13:24.757 "model_number": "SPDK bdev Controller", 00:13:24.757 "max_namespaces": 10, 00:13:24.757 "min_cntlid": 1, 00:13:24.757 "max_cntlid": 65519, 00:13:24.757 "ana_reporting": false 00:13:24.757 } 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "method": "nvmf_subsystem_add_host", 00:13:24.757 "params": { 00:13:24.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.757 "host": "nqn.2016-06.io.spdk:host1", 00:13:24.757 "psk": "/tmp/tmp.vFu5WQSIs8" 00:13:24.757 } 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "method": "nvmf_subsystem_add_ns", 00:13:24.757 "params": { 00:13:24.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.757 "namespace": { 00:13:24.757 "nsid": 1, 00:13:24.757 "bdev_name": "malloc0", 00:13:24.757 "nguid": "93DA6886E55D493C8570E6E37757357A", 00:13:24.757 "uuid": "93da6886-e55d-493c-8570-e6e37757357a", 00:13:24.757 "no_auto_visible": false 00:13:24.757 } 00:13:24.757 } 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "method": "nvmf_subsystem_add_listener", 00:13:24.757 "params": { 00:13:24.757 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.757 "listen_address": { 00:13:24.757 "trtype": "TCP", 00:13:24.757 "adrfam": "IPv4", 00:13:24.757 "traddr": "10.0.0.2", 00:13:24.757 "trsvcid": "4420" 00:13:24.757 }, 00:13:24.757 "secure_channel": true 00:13:24.757 } 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 }' 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73126 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73126 00:13:24.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73126 ']' 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.757 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 [2024-07-15 21:26:58.112088] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:24.757 [2024-07-15 21:26:58.112679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.015 [2024-07-15 21:26:58.240703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.015 [2024-07-15 21:26:58.335673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:25.015 [2024-07-15 21:26:58.335891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.015 [2024-07-15 21:26:58.336069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.015 [2024-07-15 21:26:58.336123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.015 [2024-07-15 21:26:58.336196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.015 [2024-07-15 21:26:58.336304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.275 [2024-07-15 21:26:58.491482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.275 [2024-07-15 21:26:58.553901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.275 [2024-07-15 21:26:58.569805] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:25.275 [2024-07-15 21:26:58.585784] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:25.275 [2024-07-15 21:26:58.586092] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.841 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.841 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:25.841 21:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.841 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:25.841 21:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73158 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73158 /var/tmp/bdevperf.sock 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73158 ']' 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:25.841 21:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:25.841 "subsystems": [ 00:13:25.841 { 00:13:25.841 "subsystem": "keyring", 00:13:25.841 "config": [] 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "subsystem": "iobuf", 00:13:25.841 "config": [ 00:13:25.841 { 00:13:25.841 "method": "iobuf_set_options", 00:13:25.841 "params": { 00:13:25.841 "small_pool_count": 8192, 00:13:25.841 "large_pool_count": 1024, 00:13:25.841 "small_bufsize": 8192, 00:13:25.841 "large_bufsize": 135168 00:13:25.841 } 00:13:25.841 } 00:13:25.841 ] 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "subsystem": "sock", 00:13:25.841 "config": [ 00:13:25.841 { 00:13:25.841 "method": "sock_set_default_impl", 00:13:25.841 "params": { 00:13:25.841 "impl_name": "uring" 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "sock_impl_set_options", 00:13:25.841 "params": { 00:13:25.841 "impl_name": "ssl", 00:13:25.841 "recv_buf_size": 4096, 00:13:25.841 "send_buf_size": 4096, 00:13:25.841 "enable_recv_pipe": true, 00:13:25.841 "enable_quickack": false, 00:13:25.841 "enable_placement_id": 0, 00:13:25.841 "enable_zerocopy_send_server": 
true, 00:13:25.841 "enable_zerocopy_send_client": false, 00:13:25.841 "zerocopy_threshold": 0, 00:13:25.841 "tls_version": 0, 00:13:25.841 "enable_ktls": false 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "sock_impl_set_options", 00:13:25.841 "params": { 00:13:25.841 "impl_name": "posix", 00:13:25.841 "recv_buf_size": 2097152, 00:13:25.841 "send_buf_size": 2097152, 00:13:25.841 "enable_recv_pipe": true, 00:13:25.841 "enable_quickack": false, 00:13:25.841 "enable_placement_id": 0, 00:13:25.841 "enable_zerocopy_send_server": true, 00:13:25.841 "enable_zerocopy_send_client": false, 00:13:25.841 "zerocopy_threshold": 0, 00:13:25.841 "tls_version": 0, 00:13:25.841 "enable_ktls": false 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "sock_impl_set_options", 00:13:25.841 "params": { 00:13:25.841 "impl_name": "uring", 00:13:25.841 "recv_buf_size": 2097152, 00:13:25.841 "send_buf_size": 2097152, 00:13:25.841 "enable_recv_pipe": true, 00:13:25.841 "enable_quickack": false, 00:13:25.841 "enable_placement_id": 0, 00:13:25.841 "enable_zerocopy_send_server": false, 00:13:25.841 "enable_zerocopy_send_client": false, 00:13:25.841 "zerocopy_threshold": 0, 00:13:25.841 "tls_version": 0, 00:13:25.841 "enable_ktls": false 00:13:25.841 } 00:13:25.841 } 00:13:25.841 ] 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "subsystem": "vmd", 00:13:25.841 "config": [] 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "subsystem": "accel", 00:13:25.841 "config": [ 00:13:25.841 { 00:13:25.841 "method": "accel_set_options", 00:13:25.841 "params": { 00:13:25.841 "small_cache_size": 128, 00:13:25.841 "large_cache_size": 16, 00:13:25.841 "task_count": 2048, 00:13:25.841 "sequence_count": 2048, 00:13:25.841 "buf_count": 2048 00:13:25.841 } 00:13:25.841 } 00:13:25.841 ] 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "subsystem": "bdev", 00:13:25.841 "config": [ 00:13:25.841 { 00:13:25.841 "method": "bdev_set_options", 00:13:25.841 "params": { 00:13:25.841 "bdev_io_pool_size": 65535, 00:13:25.841 "bdev_io_cache_size": 256, 00:13:25.841 "bdev_auto_examine": true, 00:13:25.841 "iobuf_small_cache_size": 128, 00:13:25.841 "iobuf_large_cache_size": 16 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "bdev_raid_set_options", 00:13:25.841 "params": { 00:13:25.841 "process_window_size_kb": 1024 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "bdev_iscsi_set_options", 00:13:25.841 "params": { 00:13:25.841 "timeout_sec": 30 00:13:25.841 } 00:13:25.841 }, 00:13:25.841 { 00:13:25.841 "method": "bdev_nvme_set_options", 00:13:25.841 "params": { 00:13:25.841 "action_on_timeout": "none", 00:13:25.841 "timeout_us": 0, 00:13:25.841 "timeout_admin_us": 0, 00:13:25.841 "keep_alive_timeout_ms": 10000, 00:13:25.841 "arbitration_burst": 0, 00:13:25.841 "low_priority_weight": 0, 00:13:25.841 "medium_priority_weight": 0, 00:13:25.841 "high_priority_weight": 0, 00:13:25.841 "nvme_adminq_poll_period_us": 10000, 00:13:25.841 "nvme_ioq_poll_period_us": 0, 00:13:25.841 "io_queue_requests": 512, 00:13:25.841 "delay_cmd_submit": true, 00:13:25.841 "transport_retry_count": 4, 00:13:25.841 "bdev_retry_count": 3, 00:13:25.841 "transport_ack_timeout": 0, 00:13:25.841 "ctrlr_loss_timeout_sec": 0, 00:13:25.841 "reconnect_delay_sec": 0, 00:13:25.841 "fast_io_fail_timeout_sec": 0, 00:13:25.841 "disable_auto_failback": false, 00:13:25.841 "generate_uuids": false, 00:13:25.841 "transport_tos": 0, 00:13:25.841 "nvme_error_stat": false, 00:13:25.841 "rdma_srq_size": 0, 00:13:25.841 "io_path_stat": 
false, 00:13:25.841 "allow_accel_sequence": false, 00:13:25.841 "rdma_max_cq_size": 0, 00:13:25.841 "rdma_cm_event_timeout_ms": 0, 00:13:25.842 "dhchap_digests": [ 00:13:25.842 "sha256", 00:13:25.842 "sha384", 00:13:25.842 "sha512" 00:13:25.842 ], 00:13:25.842 "dhchap_dhgroups": [ 00:13:25.842 "null", 00:13:25.842 "ffdhe2048", 00:13:25.842 "ffdhe3072", 00:13:25.842 "ffdhe4096", 00:13:25.842 "ffdhe6144", 00:13:25.842 "ffdhe8192" 00:13:25.842 ] 00:13:25.842 } 00:13:25.842 }, 00:13:25.842 { 00:13:25.842 "method": "bdev_nvme_attach_controller", 00:13:25.842 "params": { 00:13:25.842 "name": "TLSTEST", 00:13:25.842 "trtype": "TCP", 00:13:25.842 "adrfam": "IPv4", 00:13:25.842 "traddr": "10.0.0.2", 00:13:25.842 "trsvcid": "4420", 00:13:25.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.842 "prchk_reftag": false, 00:13:25.842 "prchk_guard": false, 00:13:25.842 "ctrlr_loss_timeout_sec": 0, 00:13:25.842 "reconnect_delay_sec": 0, 00:13:25.842 "fast_io_fail_timeout_sec": 0, 00:13:25.842 "psk": "/tmp/tmp.vFu5WQSIs8", 00:13:25.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:25.842 "hdgst": false, 00:13:25.842 "ddgst": false 00:13:25.842 } 00:13:25.842 }, 00:13:25.842 { 00:13:25.842 "method": "bdev_nvme_set_hotplug", 00:13:25.842 "params": { 00:13:25.842 "period_us": 100000, 00:13:25.842 "enable": false 00:13:25.842 } 00:13:25.842 }, 00:13:25.842 { 00:13:25.842 "method": "bdev_wait_for_examine" 00:13:25.842 } 00:13:25.842 ] 00:13:25.842 }, 00:13:25.842 { 00:13:25.842 "subsystem": "nbd", 00:13:25.842 "config": [] 00:13:25.842 } 00:13:25.842 ] 00:13:25.842 }' 00:13:25.842 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.842 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.842 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.842 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.842 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.842 [2024-07-15 21:26:59.108282] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:25.842 [2024-07-15 21:26:59.108515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:13:26.099 [2024-07-15 21:26:59.250803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.099 [2024-07-15 21:26:59.334222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.099 [2024-07-15 21:26:59.456852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:26.357 [2024-07-15 21:26:59.491069] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.357 [2024-07-15 21:26:59.491172] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:26.635 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.635 21:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:26.635 21:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:26.893 Running I/O for 10 seconds... 00:13:36.886 00:13:36.886 Latency(us) 00:13:36.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.886 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:36.886 Verification LBA range: start 0x0 length 0x2000 00:13:36.886 TLSTESTn1 : 10.01 5289.33 20.66 0.00 0.00 24158.99 5316.58 33689.19 00:13:36.886 =================================================================================================================== 00:13:36.886 Total : 5289.33 20.66 0.00 0.00 24158.99 5316.58 33689.19 00:13:36.886 0 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73158 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73158 ']' 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73158 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73158 00:13:36.886 killing process with pid 73158 00:13:36.886 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.886 00:13:36.886 Latency(us) 00:13:36.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.886 =================================================================================================================== 00:13:36.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73158' 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73158 00:13:36.886 [2024-07-15 21:27:10.121599] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:36.886 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73158 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73126 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73126 ']' 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73126 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73126 00:13:37.145 killing process with pid 73126 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73126' 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73126 00:13:37.145 [2024-07-15 21:27:10.353939] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:37.145 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73126 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73291 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73291 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73291 ']' 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.408 21:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.408 [2024-07-15 21:27:10.614637] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:37.408 [2024-07-15 21:27:10.614706] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.408 [2024-07-15 21:27:10.756644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.667 [2024-07-15 21:27:10.841951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:37.667 [2024-07-15 21:27:10.842000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.667 [2024-07-15 21:27:10.842010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.667 [2024-07-15 21:27:10.842018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.667 [2024-07-15 21:27:10.842025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.667 [2024-07-15 21:27:10.842049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.667 [2024-07-15 21:27:10.882731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.vFu5WQSIs8 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.vFu5WQSIs8 00:13:38.234 21:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:38.492 [2024-07-15 21:27:11.728591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.492 21:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.751 21:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:38.751 [2024-07-15 21:27:12.116015] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.751 [2024-07-15 21:27:12.116196] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.009 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:39.009 malloc0 00:13:39.009 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:39.268 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8 00:13:39.527 [2024-07-15 21:27:12.656032] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73340 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
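For reference, the target-side TLS setup exercised here reduces to a short sequence of rpc.py calls; the sketch below is only a recap of the commands already visible in this run (tls.sh lines 51-58), not an extra test step. The NQNs, serial number, 10.0.0.2:4420 listener and PSK file /tmp/tmp.vFu5WQSIs8 are simply the values this particular test uses:

    # TCP transport; the -o flag lines up with the "c2h_success": false seen in the config dumps above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
    # subsystem plus a TLS-enabled listener; -k lines up with "secure_channel": true in the dump
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # 32 MB malloc bdev with 4096-byte blocks (the 8192 x 4096 blocks reported in the dump), exposed as namespace 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # allow host1 to connect using the pre-shared key file (the PSK-path form that the log flags as deprecated)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vFu5WQSIs8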
00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73340 /var/tmp/bdevperf.sock 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73340 ']' 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.527 21:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.527 [2024-07-15 21:27:12.705529] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:39.527 [2024-07-15 21:27:12.705596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73340 ] 00:13:39.527 [2024-07-15 21:27:12.845052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.786 [2024-07-15 21:27:12.933251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.786 [2024-07-15 21:27:12.974080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.395 21:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.395 21:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:40.395 21:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFu5WQSIs8 00:13:40.654 21:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:40.654 [2024-07-15 21:27:13.975018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.911 nvme0n1 00:13:40.911 21:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:40.911 Running I/O for 1 seconds... 
00:13:41.845 00:13:41.845 Latency(us) 00:13:41.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.845 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:41.845 Verification LBA range: start 0x0 length 0x2000 00:13:41.845 nvme0n1 : 1.01 5875.67 22.95 0.00 0.00 21628.77 4526.98 18002.66 00:13:41.845 =================================================================================================================== 00:13:41.845 Total : 5875.67 22.95 0.00 0.00 21628.77 4526.98 18002.66 00:13:41.845 0 00:13:41.845 21:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 73340 00:13:41.845 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73340 ']' 00:13:41.845 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73340 00:13:41.845 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73340 00:13:42.104 killing process with pid 73340 00:13:42.104 Received shutdown signal, test time was about 1.000000 seconds 00:13:42.104 00:13:42.104 Latency(us) 00:13:42.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.104 =================================================================================================================== 00:13:42.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73340' 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73340 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73340 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73291 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73291 ']' 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73291 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.104 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73291 00:13:42.362 killing process with pid 73291 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73291' 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73291 00:13:42.362 [2024-07-15 21:27:15.474932] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73291 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:42.362 21:27:15 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73386 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73386 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73386 ']' 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.363 21:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.363 [2024-07-15 21:27:15.725594] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:42.363 [2024-07-15 21:27:15.725652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.621 [2024-07-15 21:27:15.868316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.621 [2024-07-15 21:27:15.946194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.621 [2024-07-15 21:27:15.946240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.621 [2024-07-15 21:27:15.946250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.621 [2024-07-15 21:27:15.946258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.621 [2024-07-15 21:27:15.946265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:42.621 [2024-07-15 21:27:15.946289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.621 [2024-07-15 21:27:15.986853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.556 [2024-07-15 21:27:16.640131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.556 malloc0 00:13:43.556 [2024-07-15 21:27:16.668789] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:43.556 [2024-07-15 21:27:16.668955] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73418 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73418 /var/tmp/bdevperf.sock 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73418 ']' 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.556 21:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.556 [2024-07-15 21:27:16.747618] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:43.556 [2024-07-15 21:27:16.747676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73418 ] 00:13:43.556 [2024-07-15 21:27:16.887696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.825 [2024-07-15 21:27:16.976817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.825 [2024-07-15 21:27:17.018028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.392 21:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.392 21:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:44.392 21:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFu5WQSIs8 00:13:44.652 21:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:44.652 [2024-07-15 21:27:17.938833] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.652 nvme0n1 00:13:44.911 21:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:44.911 Running I/O for 1 seconds... 00:13:45.849 00:13:45.849 Latency(us) 00:13:45.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.849 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:45.849 Verification LBA range: start 0x0 length 0x2000 00:13:45.849 nvme0n1 : 1.01 5810.16 22.70 0.00 0.00 21870.37 4474.35 17370.99 00:13:45.849 =================================================================================================================== 00:13:45.849 Total : 5810.16 22.70 0.00 0.00 21870.37 4474.35 17370.99 00:13:45.849 0 00:13:45.849 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:13:45.849 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.849 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.107 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.107 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:13:46.107 "subsystems": [ 00:13:46.107 { 00:13:46.107 "subsystem": "keyring", 00:13:46.107 "config": [ 00:13:46.107 { 00:13:46.107 "method": "keyring_file_add_key", 00:13:46.107 "params": { 00:13:46.107 "name": "key0", 00:13:46.107 "path": "/tmp/tmp.vFu5WQSIs8" 00:13:46.107 } 00:13:46.107 } 00:13:46.107 ] 00:13:46.107 }, 00:13:46.107 { 00:13:46.107 "subsystem": "iobuf", 00:13:46.107 "config": [ 00:13:46.107 { 00:13:46.107 "method": "iobuf_set_options", 00:13:46.107 "params": { 00:13:46.107 "small_pool_count": 8192, 00:13:46.107 "large_pool_count": 1024, 00:13:46.107 "small_bufsize": 8192, 00:13:46.107 "large_bufsize": 135168 00:13:46.108 } 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "sock", 00:13:46.108 "config": [ 00:13:46.108 { 00:13:46.108 "method": "sock_set_default_impl", 00:13:46.108 "params": { 00:13:46.108 "impl_name": "uring" 
00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "sock_impl_set_options", 00:13:46.108 "params": { 00:13:46.108 "impl_name": "ssl", 00:13:46.108 "recv_buf_size": 4096, 00:13:46.108 "send_buf_size": 4096, 00:13:46.108 "enable_recv_pipe": true, 00:13:46.108 "enable_quickack": false, 00:13:46.108 "enable_placement_id": 0, 00:13:46.108 "enable_zerocopy_send_server": true, 00:13:46.108 "enable_zerocopy_send_client": false, 00:13:46.108 "zerocopy_threshold": 0, 00:13:46.108 "tls_version": 0, 00:13:46.108 "enable_ktls": false 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "sock_impl_set_options", 00:13:46.108 "params": { 00:13:46.108 "impl_name": "posix", 00:13:46.108 "recv_buf_size": 2097152, 00:13:46.108 "send_buf_size": 2097152, 00:13:46.108 "enable_recv_pipe": true, 00:13:46.108 "enable_quickack": false, 00:13:46.108 "enable_placement_id": 0, 00:13:46.108 "enable_zerocopy_send_server": true, 00:13:46.108 "enable_zerocopy_send_client": false, 00:13:46.108 "zerocopy_threshold": 0, 00:13:46.108 "tls_version": 0, 00:13:46.108 "enable_ktls": false 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "sock_impl_set_options", 00:13:46.108 "params": { 00:13:46.108 "impl_name": "uring", 00:13:46.108 "recv_buf_size": 2097152, 00:13:46.108 "send_buf_size": 2097152, 00:13:46.108 "enable_recv_pipe": true, 00:13:46.108 "enable_quickack": false, 00:13:46.108 "enable_placement_id": 0, 00:13:46.108 "enable_zerocopy_send_server": false, 00:13:46.108 "enable_zerocopy_send_client": false, 00:13:46.108 "zerocopy_threshold": 0, 00:13:46.108 "tls_version": 0, 00:13:46.108 "enable_ktls": false 00:13:46.108 } 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "vmd", 00:13:46.108 "config": [] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "accel", 00:13:46.108 "config": [ 00:13:46.108 { 00:13:46.108 "method": "accel_set_options", 00:13:46.108 "params": { 00:13:46.108 "small_cache_size": 128, 00:13:46.108 "large_cache_size": 16, 00:13:46.108 "task_count": 2048, 00:13:46.108 "sequence_count": 2048, 00:13:46.108 "buf_count": 2048 00:13:46.108 } 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "bdev", 00:13:46.108 "config": [ 00:13:46.108 { 00:13:46.108 "method": "bdev_set_options", 00:13:46.108 "params": { 00:13:46.108 "bdev_io_pool_size": 65535, 00:13:46.108 "bdev_io_cache_size": 256, 00:13:46.108 "bdev_auto_examine": true, 00:13:46.108 "iobuf_small_cache_size": 128, 00:13:46.108 "iobuf_large_cache_size": 16 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_raid_set_options", 00:13:46.108 "params": { 00:13:46.108 "process_window_size_kb": 1024 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_iscsi_set_options", 00:13:46.108 "params": { 00:13:46.108 "timeout_sec": 30 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_nvme_set_options", 00:13:46.108 "params": { 00:13:46.108 "action_on_timeout": "none", 00:13:46.108 "timeout_us": 0, 00:13:46.108 "timeout_admin_us": 0, 00:13:46.108 "keep_alive_timeout_ms": 10000, 00:13:46.108 "arbitration_burst": 0, 00:13:46.108 "low_priority_weight": 0, 00:13:46.108 "medium_priority_weight": 0, 00:13:46.108 "high_priority_weight": 0, 00:13:46.108 "nvme_adminq_poll_period_us": 10000, 00:13:46.108 "nvme_ioq_poll_period_us": 0, 00:13:46.108 "io_queue_requests": 0, 00:13:46.108 "delay_cmd_submit": true, 00:13:46.108 "transport_retry_count": 4, 00:13:46.108 "bdev_retry_count": 3, 
00:13:46.108 "transport_ack_timeout": 0, 00:13:46.108 "ctrlr_loss_timeout_sec": 0, 00:13:46.108 "reconnect_delay_sec": 0, 00:13:46.108 "fast_io_fail_timeout_sec": 0, 00:13:46.108 "disable_auto_failback": false, 00:13:46.108 "generate_uuids": false, 00:13:46.108 "transport_tos": 0, 00:13:46.108 "nvme_error_stat": false, 00:13:46.108 "rdma_srq_size": 0, 00:13:46.108 "io_path_stat": false, 00:13:46.108 "allow_accel_sequence": false, 00:13:46.108 "rdma_max_cq_size": 0, 00:13:46.108 "rdma_cm_event_timeout_ms": 0, 00:13:46.108 "dhchap_digests": [ 00:13:46.108 "sha256", 00:13:46.108 "sha384", 00:13:46.108 "sha512" 00:13:46.108 ], 00:13:46.108 "dhchap_dhgroups": [ 00:13:46.108 "null", 00:13:46.108 "ffdhe2048", 00:13:46.108 "ffdhe3072", 00:13:46.108 "ffdhe4096", 00:13:46.108 "ffdhe6144", 00:13:46.108 "ffdhe8192" 00:13:46.108 ] 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_nvme_set_hotplug", 00:13:46.108 "params": { 00:13:46.108 "period_us": 100000, 00:13:46.108 "enable": false 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_malloc_create", 00:13:46.108 "params": { 00:13:46.108 "name": "malloc0", 00:13:46.108 "num_blocks": 8192, 00:13:46.108 "block_size": 4096, 00:13:46.108 "physical_block_size": 4096, 00:13:46.108 "uuid": "cfd4d1f1-7c77-4735-bd8f-178401a7dd00", 00:13:46.108 "optimal_io_boundary": 0 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "bdev_wait_for_examine" 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "nbd", 00:13:46.108 "config": [] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "scheduler", 00:13:46.108 "config": [ 00:13:46.108 { 00:13:46.108 "method": "framework_set_scheduler", 00:13:46.108 "params": { 00:13:46.108 "name": "static" 00:13:46.108 } 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "subsystem": "nvmf", 00:13:46.108 "config": [ 00:13:46.108 { 00:13:46.108 "method": "nvmf_set_config", 00:13:46.108 "params": { 00:13:46.108 "discovery_filter": "match_any", 00:13:46.108 "admin_cmd_passthru": { 00:13:46.108 "identify_ctrlr": false 00:13:46.108 } 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_set_max_subsystems", 00:13:46.108 "params": { 00:13:46.108 "max_subsystems": 1024 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_set_crdt", 00:13:46.108 "params": { 00:13:46.108 "crdt1": 0, 00:13:46.108 "crdt2": 0, 00:13:46.108 "crdt3": 0 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_create_transport", 00:13:46.108 "params": { 00:13:46.108 "trtype": "TCP", 00:13:46.108 "max_queue_depth": 128, 00:13:46.108 "max_io_qpairs_per_ctrlr": 127, 00:13:46.108 "in_capsule_data_size": 4096, 00:13:46.108 "max_io_size": 131072, 00:13:46.108 "io_unit_size": 131072, 00:13:46.108 "max_aq_depth": 128, 00:13:46.108 "num_shared_buffers": 511, 00:13:46.108 "buf_cache_size": 4294967295, 00:13:46.108 "dif_insert_or_strip": false, 00:13:46.108 "zcopy": false, 00:13:46.108 "c2h_success": false, 00:13:46.108 "sock_priority": 0, 00:13:46.108 "abort_timeout_sec": 1, 00:13:46.108 "ack_timeout": 0, 00:13:46.108 "data_wr_pool_size": 0 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_create_subsystem", 00:13:46.108 "params": { 00:13:46.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.108 "allow_any_host": false, 00:13:46.108 "serial_number": "00000000000000000000", 00:13:46.108 "model_number": "SPDK bdev Controller", 00:13:46.108 "max_namespaces": 32, 
00:13:46.108 "min_cntlid": 1, 00:13:46.108 "max_cntlid": 65519, 00:13:46.108 "ana_reporting": false 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_subsystem_add_host", 00:13:46.108 "params": { 00:13:46.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.108 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.108 "psk": "key0" 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_subsystem_add_ns", 00:13:46.108 "params": { 00:13:46.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.108 "namespace": { 00:13:46.108 "nsid": 1, 00:13:46.108 "bdev_name": "malloc0", 00:13:46.108 "nguid": "CFD4D1F17C774735BD8F178401A7DD00", 00:13:46.108 "uuid": "cfd4d1f1-7c77-4735-bd8f-178401a7dd00", 00:13:46.108 "no_auto_visible": false 00:13:46.108 } 00:13:46.108 } 00:13:46.108 }, 00:13:46.108 { 00:13:46.108 "method": "nvmf_subsystem_add_listener", 00:13:46.108 "params": { 00:13:46.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.108 "listen_address": { 00:13:46.108 "trtype": "TCP", 00:13:46.108 "adrfam": "IPv4", 00:13:46.108 "traddr": "10.0.0.2", 00:13:46.108 "trsvcid": "4420" 00:13:46.108 }, 00:13:46.108 "secure_channel": false, 00:13:46.108 "sock_impl": "ssl" 00:13:46.108 } 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 } 00:13:46.108 ] 00:13:46.108 }' 00:13:46.108 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:46.368 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:13:46.368 "subsystems": [ 00:13:46.368 { 00:13:46.368 "subsystem": "keyring", 00:13:46.368 "config": [ 00:13:46.368 { 00:13:46.368 "method": "keyring_file_add_key", 00:13:46.368 "params": { 00:13:46.368 "name": "key0", 00:13:46.368 "path": "/tmp/tmp.vFu5WQSIs8" 00:13:46.368 } 00:13:46.368 } 00:13:46.368 ] 00:13:46.368 }, 00:13:46.368 { 00:13:46.368 "subsystem": "iobuf", 00:13:46.368 "config": [ 00:13:46.368 { 00:13:46.368 "method": "iobuf_set_options", 00:13:46.368 "params": { 00:13:46.368 "small_pool_count": 8192, 00:13:46.368 "large_pool_count": 1024, 00:13:46.368 "small_bufsize": 8192, 00:13:46.368 "large_bufsize": 135168 00:13:46.368 } 00:13:46.368 } 00:13:46.368 ] 00:13:46.368 }, 00:13:46.368 { 00:13:46.368 "subsystem": "sock", 00:13:46.368 "config": [ 00:13:46.368 { 00:13:46.368 "method": "sock_set_default_impl", 00:13:46.368 "params": { 00:13:46.368 "impl_name": "uring" 00:13:46.368 } 00:13:46.368 }, 00:13:46.369 { 00:13:46.369 "method": "sock_impl_set_options", 00:13:46.369 "params": { 00:13:46.369 "impl_name": "ssl", 00:13:46.369 "recv_buf_size": 4096, 00:13:46.369 "send_buf_size": 4096, 00:13:46.369 "enable_recv_pipe": true, 00:13:46.369 "enable_quickack": false, 00:13:46.369 "enable_placement_id": 0, 00:13:46.369 "enable_zerocopy_send_server": true, 00:13:46.369 "enable_zerocopy_send_client": false, 00:13:46.369 "zerocopy_threshold": 0, 00:13:46.369 "tls_version": 0, 00:13:46.369 "enable_ktls": false 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "sock_impl_set_options", 00:13:46.369 "params": { 00:13:46.369 "impl_name": "posix", 00:13:46.369 "recv_buf_size": 2097152, 00:13:46.369 "send_buf_size": 2097152, 00:13:46.369 "enable_recv_pipe": true, 00:13:46.369 "enable_quickack": false, 00:13:46.369 "enable_placement_id": 0, 00:13:46.369 "enable_zerocopy_send_server": true, 00:13:46.369 "enable_zerocopy_send_client": false, 00:13:46.369 "zerocopy_threshold": 0, 00:13:46.369 "tls_version": 0, 00:13:46.369 "enable_ktls": false 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 
00:13:46.369 "method": "sock_impl_set_options", 00:13:46.369 "params": { 00:13:46.369 "impl_name": "uring", 00:13:46.369 "recv_buf_size": 2097152, 00:13:46.369 "send_buf_size": 2097152, 00:13:46.369 "enable_recv_pipe": true, 00:13:46.369 "enable_quickack": false, 00:13:46.369 "enable_placement_id": 0, 00:13:46.369 "enable_zerocopy_send_server": false, 00:13:46.369 "enable_zerocopy_send_client": false, 00:13:46.369 "zerocopy_threshold": 0, 00:13:46.369 "tls_version": 0, 00:13:46.369 "enable_ktls": false 00:13:46.369 } 00:13:46.369 } 00:13:46.369 ] 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "subsystem": "vmd", 00:13:46.369 "config": [] 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "subsystem": "accel", 00:13:46.369 "config": [ 00:13:46.369 { 00:13:46.369 "method": "accel_set_options", 00:13:46.369 "params": { 00:13:46.369 "small_cache_size": 128, 00:13:46.369 "large_cache_size": 16, 00:13:46.369 "task_count": 2048, 00:13:46.369 "sequence_count": 2048, 00:13:46.369 "buf_count": 2048 00:13:46.369 } 00:13:46.369 } 00:13:46.369 ] 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "subsystem": "bdev", 00:13:46.369 "config": [ 00:13:46.369 { 00:13:46.369 "method": "bdev_set_options", 00:13:46.369 "params": { 00:13:46.369 "bdev_io_pool_size": 65535, 00:13:46.369 "bdev_io_cache_size": 256, 00:13:46.369 "bdev_auto_examine": true, 00:13:46.369 "iobuf_small_cache_size": 128, 00:13:46.369 "iobuf_large_cache_size": 16 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_raid_set_options", 00:13:46.369 "params": { 00:13:46.369 "process_window_size_kb": 1024 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_iscsi_set_options", 00:13:46.369 "params": { 00:13:46.369 "timeout_sec": 30 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_nvme_set_options", 00:13:46.369 "params": { 00:13:46.369 "action_on_timeout": "none", 00:13:46.369 "timeout_us": 0, 00:13:46.369 "timeout_admin_us": 0, 00:13:46.369 "keep_alive_timeout_ms": 10000, 00:13:46.369 "arbitration_burst": 0, 00:13:46.369 "low_priority_weight": 0, 00:13:46.369 "medium_priority_weight": 0, 00:13:46.369 "high_priority_weight": 0, 00:13:46.369 "nvme_adminq_poll_period_us": 10000, 00:13:46.369 "nvme_ioq_poll_period_us": 0, 00:13:46.369 "io_queue_requests": 512, 00:13:46.369 "delay_cmd_submit": true, 00:13:46.369 "transport_retry_count": 4, 00:13:46.369 "bdev_retry_count": 3, 00:13:46.369 "transport_ack_timeout": 0, 00:13:46.369 "ctrlr_loss_timeout_sec": 0, 00:13:46.369 "reconnect_delay_sec": 0, 00:13:46.369 "fast_io_fail_timeout_sec": 0, 00:13:46.369 "disable_auto_failback": false, 00:13:46.369 "generate_uuids": false, 00:13:46.369 "transport_tos": 0, 00:13:46.369 "nvme_error_stat": false, 00:13:46.369 "rdma_srq_size": 0, 00:13:46.369 "io_path_stat": false, 00:13:46.369 "allow_accel_sequence": false, 00:13:46.369 "rdma_max_cq_size": 0, 00:13:46.369 "rdma_cm_event_timeout_ms": 0, 00:13:46.369 "dhchap_digests": [ 00:13:46.369 "sha256", 00:13:46.369 "sha384", 00:13:46.369 "sha512" 00:13:46.369 ], 00:13:46.369 "dhchap_dhgroups": [ 00:13:46.369 "null", 00:13:46.369 "ffdhe2048", 00:13:46.369 "ffdhe3072", 00:13:46.369 "ffdhe4096", 00:13:46.369 "ffdhe6144", 00:13:46.369 "ffdhe8192" 00:13:46.369 ] 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_nvme_attach_controller", 00:13:46.369 "params": { 00:13:46.369 "name": "nvme0", 00:13:46.369 "trtype": "TCP", 00:13:46.369 "adrfam": "IPv4", 00:13:46.369 "traddr": "10.0.0.2", 00:13:46.369 "trsvcid": "4420", 00:13:46.369 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:13:46.369 "prchk_reftag": false, 00:13:46.369 "prchk_guard": false, 00:13:46.369 "ctrlr_loss_timeout_sec": 0, 00:13:46.369 "reconnect_delay_sec": 0, 00:13:46.369 "fast_io_fail_timeout_sec": 0, 00:13:46.369 "psk": "key0", 00:13:46.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:46.369 "hdgst": false, 00:13:46.369 "ddgst": false 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_nvme_set_hotplug", 00:13:46.369 "params": { 00:13:46.369 "period_us": 100000, 00:13:46.369 "enable": false 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_enable_histogram", 00:13:46.369 "params": { 00:13:46.369 "name": "nvme0n1", 00:13:46.369 "enable": true 00:13:46.369 } 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "method": "bdev_wait_for_examine" 00:13:46.369 } 00:13:46.369 ] 00:13:46.369 }, 00:13:46.369 { 00:13:46.369 "subsystem": "nbd", 00:13:46.369 "config": [] 00:13:46.369 } 00:13:46.369 ] 00:13:46.369 }' 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 73418 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73418 ']' 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73418 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73418 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:46.369 killing process with pid 73418 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73418' 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73418 00:13:46.369 Received shutdown signal, test time was about 1.000000 seconds 00:13:46.369 00:13:46.369 Latency(us) 00:13:46.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.369 =================================================================================================================== 00:13:46.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:46.369 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73418 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73386 ']' 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:46.629 killing process with pid 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73386' 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 73386 00:13:46.629 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:13:46.629 "subsystems": [ 00:13:46.629 { 00:13:46.629 "subsystem": "keyring", 00:13:46.629 "config": [ 00:13:46.629 { 00:13:46.629 "method": "keyring_file_add_key", 00:13:46.629 "params": { 00:13:46.629 "name": "key0", 00:13:46.629 "path": "/tmp/tmp.vFu5WQSIs8" 00:13:46.629 } 00:13:46.629 } 00:13:46.629 ] 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "subsystem": "iobuf", 00:13:46.629 "config": [ 00:13:46.629 { 00:13:46.629 "method": "iobuf_set_options", 00:13:46.629 "params": { 00:13:46.629 "small_pool_count": 8192, 00:13:46.629 "large_pool_count": 1024, 00:13:46.629 "small_bufsize": 8192, 00:13:46.629 "large_bufsize": 135168 00:13:46.629 } 00:13:46.629 } 00:13:46.629 ] 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "subsystem": "sock", 00:13:46.629 "config": [ 00:13:46.629 { 00:13:46.629 "method": "sock_set_default_impl", 00:13:46.629 "params": { 00:13:46.629 "impl_name": "uring" 00:13:46.629 } 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "method": "sock_impl_set_options", 00:13:46.629 "params": { 00:13:46.629 "impl_name": "ssl", 00:13:46.629 "recv_buf_size": 4096, 00:13:46.629 "send_buf_size": 4096, 00:13:46.629 "enable_recv_pipe": true, 00:13:46.629 "enable_quickack": false, 00:13:46.629 "enable_placement_id": 0, 00:13:46.629 "enable_zerocopy_send_server": true, 00:13:46.629 "enable_zerocopy_send_client": false, 00:13:46.629 "zerocopy_threshold": 0, 00:13:46.629 "tls_version": 0, 00:13:46.629 "enable_ktls": false 00:13:46.629 } 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "method": "sock_impl_set_options", 00:13:46.629 "params": { 00:13:46.629 "impl_name": "posix", 00:13:46.629 "recv_buf_size": 2097152, 00:13:46.629 "send_buf_size": 2097152, 00:13:46.629 "enable_recv_pipe": true, 00:13:46.629 "enable_quickack": false, 00:13:46.629 "enable_placement_id": 0, 00:13:46.629 "enable_zerocopy_send_server": true, 00:13:46.629 "enable_zerocopy_send_client": false, 00:13:46.629 "zerocopy_threshold": 0, 00:13:46.629 "tls_version": 0, 00:13:46.629 "enable_ktls": false 00:13:46.629 } 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "method": "sock_impl_set_options", 00:13:46.629 "params": { 00:13:46.629 "impl_name": "uring", 00:13:46.629 "recv_buf_size": 2097152, 00:13:46.629 "send_buf_size": 2097152, 00:13:46.629 "enable_recv_pipe": true, 00:13:46.629 "enable_quickack": false, 00:13:46.629 "enable_placement_id": 0, 00:13:46.629 "enable_zerocopy_send_server": false, 00:13:46.629 "enable_zerocopy_send_client": false, 00:13:46.629 "zerocopy_threshold": 0, 00:13:46.629 "tls_version": 0, 00:13:46.629 "enable_ktls": false 00:13:46.629 } 00:13:46.629 } 00:13:46.629 ] 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "subsystem": "vmd", 00:13:46.629 "config": [] 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "subsystem": "accel", 00:13:46.629 "config": [ 00:13:46.629 { 00:13:46.629 "method": "accel_set_options", 00:13:46.629 "params": { 00:13:46.629 "small_cache_size": 128, 00:13:46.629 "large_cache_size": 16, 00:13:46.629 "task_count": 2048, 00:13:46.629 "sequence_count": 2048, 00:13:46.629 "buf_count": 2048 00:13:46.629 } 00:13:46.629 } 00:13:46.629 ] 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "subsystem": "bdev", 00:13:46.629 "config": [ 00:13:46.629 { 00:13:46.629 "method": "bdev_set_options", 00:13:46.629 "params": { 00:13:46.629 "bdev_io_pool_size": 65535, 00:13:46.629 "bdev_io_cache_size": 256, 00:13:46.629 "bdev_auto_examine": true, 00:13:46.629 "iobuf_small_cache_size": 128, 00:13:46.629 
"iobuf_large_cache_size": 16 00:13:46.629 } 00:13:46.629 }, 00:13:46.629 { 00:13:46.629 "method": "bdev_raid_set_options", 00:13:46.629 "params": { 00:13:46.629 "process_window_size_kb": 1024 00:13:46.629 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "bdev_iscsi_set_options", 00:13:46.630 "params": { 00:13:46.630 "timeout_sec": 30 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "bdev_nvme_set_options", 00:13:46.630 "params": { 00:13:46.630 "action_on_timeout": "none", 00:13:46.630 "timeout_us": 0, 00:13:46.630 "timeout_admin_us": 0, 00:13:46.630 "keep_alive_timeout_ms": 10000, 00:13:46.630 "arbitration_burst": 0, 00:13:46.630 "low_priority_weight": 0, 00:13:46.630 "medium_priority_weight": 0, 00:13:46.630 "high_priority_weight": 0, 00:13:46.630 "nvme_adminq_poll_period_us": 10000, 00:13:46.630 "nvme_ioq_poll_period_us": 0, 00:13:46.630 "io_queue_requests": 0, 00:13:46.630 "delay_cmd_submit": true, 00:13:46.630 "transport_retry_count": 4, 00:13:46.630 "bdev_retry_count": 3, 00:13:46.630 "transport_ack_timeout": 0, 00:13:46.630 "ctrlr_loss_timeout_sec": 0, 00:13:46.630 "reconnect_delay_sec": 0, 00:13:46.630 "fast_io_fail_timeout_sec": 0, 00:13:46.630 "disable_auto_failback": false, 00:13:46.630 "generate_uuids": false, 00:13:46.630 "transport_tos": 0, 00:13:46.630 "nvme_error_stat": false, 00:13:46.630 "rdma_srq_size": 0, 00:13:46.630 "io_path_stat": false, 00:13:46.630 "allow_accel_sequence": false, 00:13:46.630 "rdma_max_cq_size": 0, 00:13:46.630 "rdma_cm_event_timeout_ms": 0, 00:13:46.630 "dhchap_digests": [ 00:13:46.630 "sha256", 00:13:46.630 "sha384", 00:13:46.630 "sha512" 00:13:46.630 ], 00:13:46.630 "dhchap_dhgroups": [ 00:13:46.630 "null", 00:13:46.630 "ffdhe2048", 00:13:46.630 "ffdhe3072", 00:13:46.630 "ffdhe4096", 00:13:46.630 "ffdhe6144", 00:13:46.630 "ffdhe8192" 00:13:46.630 ] 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "bdev_nvme_set_hotplug", 00:13:46.630 "params": { 00:13:46.630 "period_us": 100000, 00:13:46.630 "enable": false 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "bdev_malloc_create", 00:13:46.630 "params": { 00:13:46.630 "name": "malloc0", 00:13:46.630 "num_blocks": 8192, 00:13:46.630 "block_size": 4096, 00:13:46.630 "physical_block_size": 4096, 00:13:46.630 "uuid": "cfd4d1f1-7c77-4735-bd8f-178401a7dd00", 00:13:46.630 "optimal_io_boundary": 0 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "bdev_wait_for_examine" 00:13:46.630 } 00:13:46.630 ] 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "subsystem": "nbd", 00:13:46.630 "config": [] 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "subsystem": "scheduler", 00:13:46.630 "config": [ 00:13:46.630 { 00:13:46.630 "method": "framework_set_scheduler", 00:13:46.630 "params": { 00:13:46.630 "name": "static" 00:13:46.630 } 00:13:46.630 } 00:13:46.630 ] 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "subsystem": "nvmf", 00:13:46.630 "config": [ 00:13:46.630 { 00:13:46.630 "method": "nvmf_set_config", 00:13:46.630 "params": { 00:13:46.630 "discovery_filter": "match_any", 00:13:46.630 "admin_cmd_passthru": { 00:13:46.630 "identify_ctrlr": false 00:13:46.630 } 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_set_max_subsystems", 00:13:46.630 "params": { 00:13:46.630 "max_subsystems": 1024 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_set_crdt", 00:13:46.630 "params": { 00:13:46.630 "crdt1": 0, 00:13:46.630 "crdt2": 0, 00:13:46.630 "crdt3": 0 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 
00:13:46.630 "method": "nvmf_create_transport", 00:13:46.630 "params": { 00:13:46.630 "trtype": "TCP", 00:13:46.630 "max_queue_depth": 128, 00:13:46.630 "max_io_qpairs_per_ctrlr": 127, 00:13:46.630 "in_capsule_data_size": 4096, 00:13:46.630 "max_io_size": 131072, 00:13:46.630 "io_unit_size": 131072, 00:13:46.630 "max_aq_depth": 128, 00:13:46.630 "num_shared_buffers": 511, 00:13:46.630 "buf_cache_size": 4294967295, 00:13:46.630 "dif_insert_or_strip": false, 00:13:46.630 "zcopy": false, 00:13:46.630 "c2h_success": false, 00:13:46.630 "sock_priority": 0, 00:13:46.630 "abort_timeout_sec": 1, 00:13:46.630 "ack_timeout": 0, 00:13:46.630 "data_wr_pool_size": 0 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_create_subsystem", 00:13:46.630 "params": { 00:13:46.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.630 "allow_any_host": false, 00:13:46.630 "serial_number": "00000000000000000000", 00:13:46.630 "model_number": "SPDK bdev Controller", 00:13:46.630 "max_namespaces": 32, 00:13:46.630 "min_cntlid": 1, 00:13:46.630 "max_cntlid": 65519, 00:13:46.630 "ana_reporting": false 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_subsystem_add_host", 00:13:46.630 "params": { 00:13:46.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.630 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.630 "psk": "key0" 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_subsystem_add_ns", 00:13:46.630 "params": { 00:13:46.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.630 "namespace": { 00:13:46.630 "nsid": 1, 00:13:46.630 "bdev_name": "malloc0", 00:13:46.630 "nguid": "CFD4D1F17C774735BD8F178401A7DD00", 00:13:46.630 "uuid": "cfd4d1f1-7c77-4735-bd8f-178401a7dd00", 00:13:46.630 "no_auto_visible": false 00:13:46.630 } 00:13:46.630 } 00:13:46.630 }, 00:13:46.630 { 00:13:46.630 "method": "nvmf_subsystem_add_listener", 00:13:46.630 "params": { 00:13:46.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.630 "listen_address": { 00:13:46.630 "trtype": "TCP", 00:13:46.630 "adrfam": "IPv4", 00:13:46.630 "traddr": "10.0.0.2", 00:13:46.630 "trsvcid": "4420" 00:13:46.630 }, 00:13:46.630 "secure_channel": false, 00:13:46.630 "sock_impl": "ssl" 00:13:46.630 } 00:13:46.630 } 00:13:46.630 ] 00:13:46.630 } 00:13:46.630 ] 00:13:46.630 }' 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73473 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73473 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73473 ']' 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.630 21:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.889 [2024-07-15 21:27:20.047024] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:46.889 [2024-07-15 21:27:20.047091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.889 [2024-07-15 21:27:20.175935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.148 [2024-07-15 21:27:20.269245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.148 [2024-07-15 21:27:20.269284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.148 [2024-07-15 21:27:20.269293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.148 [2024-07-15 21:27:20.269301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.148 [2024-07-15 21:27:20.269308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.148 [2024-07-15 21:27:20.269372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.148 [2024-07-15 21:27:20.423004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.148 [2024-07-15 21:27:20.490510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.407 [2024-07-15 21:27:20.522397] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:47.407 [2024-07-15 21:27:20.522575] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73505 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73505 /var/tmp/bdevperf.sock 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73505 ']' 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
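With the target listening on 10.0.0.2 port 4420, the script launches the bdevperf initiator next; its command line and JSON config are traced below. Stripped of the xtrace noise, the launch and the RPCs that later drive it look roughly like this ($bdevperf_config is a hypothetical variable standing in for the echoed JSON, and the -c /dev/fd/63 argument presumably comes from process substitution, which is an inference from the trace rather than something it states):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$bdevperf_config") &      # backgrounded here only for the sketch
  # once the socket is up, the test queries the attached controller and starts the run:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests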
00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:47.667 21:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:13:47.667 "subsystems": [ 00:13:47.667 { 00:13:47.667 "subsystem": "keyring", 00:13:47.667 "config": [ 00:13:47.667 { 00:13:47.667 "method": "keyring_file_add_key", 00:13:47.667 "params": { 00:13:47.667 "name": "key0", 00:13:47.667 "path": "/tmp/tmp.vFu5WQSIs8" 00:13:47.667 } 00:13:47.667 } 00:13:47.667 ] 00:13:47.667 }, 00:13:47.667 { 00:13:47.667 "subsystem": "iobuf", 00:13:47.667 "config": [ 00:13:47.667 { 00:13:47.667 "method": "iobuf_set_options", 00:13:47.667 "params": { 00:13:47.667 "small_pool_count": 8192, 00:13:47.667 "large_pool_count": 1024, 00:13:47.667 "small_bufsize": 8192, 00:13:47.667 "large_bufsize": 135168 00:13:47.667 } 00:13:47.667 } 00:13:47.667 ] 00:13:47.667 }, 00:13:47.667 { 00:13:47.667 "subsystem": "sock", 00:13:47.667 "config": [ 00:13:47.667 { 00:13:47.667 "method": "sock_set_default_impl", 00:13:47.667 "params": { 00:13:47.667 "impl_name": "uring" 00:13:47.667 } 00:13:47.667 }, 00:13:47.667 { 00:13:47.667 "method": "sock_impl_set_options", 00:13:47.667 "params": { 00:13:47.667 "impl_name": "ssl", 00:13:47.667 "recv_buf_size": 4096, 00:13:47.667 "send_buf_size": 4096, 00:13:47.667 "enable_recv_pipe": true, 00:13:47.667 "enable_quickack": false, 00:13:47.667 "enable_placement_id": 0, 00:13:47.667 "enable_zerocopy_send_server": true, 00:13:47.667 "enable_zerocopy_send_client": false, 00:13:47.667 "zerocopy_threshold": 0, 00:13:47.667 "tls_version": 0, 00:13:47.667 "enable_ktls": false 00:13:47.667 } 00:13:47.667 }, 00:13:47.667 { 00:13:47.667 "method": "sock_impl_set_options", 00:13:47.667 "params": { 00:13:47.667 "impl_name": "posix", 00:13:47.667 "recv_buf_size": 2097152, 00:13:47.667 "send_buf_size": 2097152, 00:13:47.667 "enable_recv_pipe": true, 00:13:47.667 "enable_quickack": false, 00:13:47.667 "enable_placement_id": 0, 00:13:47.667 "enable_zerocopy_send_server": true, 00:13:47.667 "enable_zerocopy_send_client": false, 00:13:47.667 "zerocopy_threshold": 0, 00:13:47.667 "tls_version": 0, 00:13:47.667 "enable_ktls": false 00:13:47.667 } 00:13:47.667 }, 00:13:47.667 { 00:13:47.667 "method": "sock_impl_set_options", 00:13:47.667 "params": { 00:13:47.667 "impl_name": "uring", 00:13:47.667 "recv_buf_size": 2097152, 00:13:47.667 "send_buf_size": 2097152, 00:13:47.667 "enable_recv_pipe": true, 00:13:47.667 "enable_quickack": false, 00:13:47.667 "enable_placement_id": 0, 00:13:47.667 "enable_zerocopy_send_server": false, 00:13:47.667 "enable_zerocopy_send_client": false, 00:13:47.667 "zerocopy_threshold": 0, 00:13:47.668 "tls_version": 0, 00:13:47.668 "enable_ktls": false 00:13:47.668 } 00:13:47.668 } 00:13:47.668 ] 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "subsystem": "vmd", 00:13:47.668 "config": [] 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "subsystem": "accel", 00:13:47.668 "config": [ 00:13:47.668 { 00:13:47.668 "method": "accel_set_options", 00:13:47.668 "params": { 00:13:47.668 "small_cache_size": 128, 00:13:47.668 "large_cache_size": 16, 00:13:47.668 "task_count": 2048, 00:13:47.668 "sequence_count": 2048, 00:13:47.668 "buf_count": 2048 00:13:47.668 } 00:13:47.668 } 00:13:47.668 ] 00:13:47.668 }, 00:13:47.668 { 
00:13:47.668 "subsystem": "bdev", 00:13:47.668 "config": [ 00:13:47.668 { 00:13:47.668 "method": "bdev_set_options", 00:13:47.668 "params": { 00:13:47.668 "bdev_io_pool_size": 65535, 00:13:47.668 "bdev_io_cache_size": 256, 00:13:47.668 "bdev_auto_examine": true, 00:13:47.668 "iobuf_small_cache_size": 128, 00:13:47.668 "iobuf_large_cache_size": 16 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_raid_set_options", 00:13:47.668 "params": { 00:13:47.668 "process_window_size_kb": 1024 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_iscsi_set_options", 00:13:47.668 "params": { 00:13:47.668 "timeout_sec": 30 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_nvme_set_options", 00:13:47.668 "params": { 00:13:47.668 "action_on_timeout": "none", 00:13:47.668 "timeout_us": 0, 00:13:47.668 "timeout_admin_us": 0, 00:13:47.668 "keep_alive_timeout_ms": 10000, 00:13:47.668 "arbitration_burst": 0, 00:13:47.668 "low_priority_weight": 0, 00:13:47.668 "medium_priority_weight": 0, 00:13:47.668 "high_priority_weight": 0, 00:13:47.668 "nvme_adminq_poll_period_us": 10000, 00:13:47.668 "nvme_ioq_poll_period_us": 0, 00:13:47.668 "io_queue_requests": 512, 00:13:47.668 "delay_cmd_submit": true, 00:13:47.668 "transport_retry_count": 4, 00:13:47.668 "bdev_retry_count": 3, 00:13:47.668 "transport_ack_timeout": 0, 00:13:47.668 "ctrlr_loss_timeout_sec": 0, 00:13:47.668 "reconnect_delay_sec": 0, 00:13:47.668 "fast_io_fail_timeout_sec": 0, 00:13:47.668 "disable_auto_failback": false, 00:13:47.668 "generate_uuids": false, 00:13:47.668 "transport_tos": 0, 00:13:47.668 "nvme_error_stat": false, 00:13:47.668 "rdma_srq_size": 0, 00:13:47.668 "io_path_stat": false, 00:13:47.668 "allow_accel_sequence": false, 00:13:47.668 "rdma_max_cq_size": 0, 00:13:47.668 "rdma_cm_event_timeout_ms": 0, 00:13:47.668 "dhchap_digests": [ 00:13:47.668 "sha256", 00:13:47.668 "sha384", 00:13:47.668 "sha512" 00:13:47.668 ], 00:13:47.668 "dhchap_dhgroups": [ 00:13:47.668 "null", 00:13:47.668 "ffdhe2048", 00:13:47.668 "ffdhe3072", 00:13:47.668 "ffdhe4096", 00:13:47.668 "ffdhe6144", 00:13:47.668 "ffdhe8192" 00:13:47.668 ] 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_nvme_attach_controller", 00:13:47.668 "params": { 00:13:47.668 "name": "nvme0", 00:13:47.668 "trtype": "TCP", 00:13:47.668 "adrfam": "IPv4", 00:13:47.668 "traddr": "10.0.0.2", 00:13:47.668 "trsvcid": "4420", 00:13:47.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.668 "prchk_reftag": false, 00:13:47.668 "prchk_guard": false, 00:13:47.668 "ctrlr_loss_timeout_sec": 0, 00:13:47.668 "reconnect_delay_sec": 0, 00:13:47.668 "fast_io_fail_timeout_sec": 0, 00:13:47.668 "psk": "key0", 00:13:47.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.668 "hdgst": false, 00:13:47.668 "ddgst": false 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_nvme_set_hotplug", 00:13:47.668 "params": { 00:13:47.668 "period_us": 100000, 00:13:47.668 "enable": false 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_enable_histogram", 00:13:47.668 "params": { 00:13:47.668 "name": "nvme0n1", 00:13:47.668 "enable": true 00:13:47.668 } 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "method": "bdev_wait_for_examine" 00:13:47.668 } 00:13:47.668 ] 00:13:47.668 }, 00:13:47.668 { 00:13:47.668 "subsystem": "nbd", 00:13:47.668 "config": [] 00:13:47.668 } 00:13:47.668 ] 00:13:47.668 }' 00:13:47.668 [2024-07-15 21:27:20.966736] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 
24.03.0 initialization... 00:13:47.668 [2024-07-15 21:27:20.966800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73505 ] 00:13:47.928 [2024-07-15 21:27:21.107281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.928 [2024-07-15 21:27:21.185425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.188 [2024-07-15 21:27:21.309561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.188 [2024-07-15 21:27:21.352158] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.447 21:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.447 21:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:48.447 21:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:13:48.447 21:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:48.706 21:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.706 21:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:48.968 Running I/O for 1 seconds... 00:13:49.909 00:13:49.909 Latency(us) 00:13:49.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.909 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.909 Verification LBA range: start 0x0 length 0x2000 00:13:49.909 nvme0n1 : 1.01 5734.39 22.40 0.00 0.00 22166.78 4316.43 17792.10 00:13:49.909 =================================================================================================================== 00:13:49.909 Total : 5734.39 22.40 0.00 0.00 22166.78 4316.43 17792.10 00:13:49.909 0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:49.909 nvmf_trace.0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 73505 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73505 ']' 00:13:49.909 21:27:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73505 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.909 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73505 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73505' 00:13:50.167 killing process with pid 73505 00:13:50.167 Received shutdown signal, test time was about 1.000000 seconds 00:13:50.167 00:13:50.167 Latency(us) 00:13:50.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.167 =================================================================================================================== 00:13:50.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73505 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73505 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:50.167 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:50.167 rmmod nvme_tcp 00:13:50.167 rmmod nvme_fabrics 00:13:50.425 rmmod nvme_keyring 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73473 ']' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73473 ']' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.425 killing process with pid 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73473' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73473 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
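The killprocess calls traced above (73418, 73386, 73505 and 73473) all follow the same autotest_common.sh pattern; a sketch of that pattern, not the exact helper source:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                 # is the process still alive?
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [[ $name == sudo ]] && return 1        # never signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap it before continuing
  }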
00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.425 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.684 21:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:50.684 21:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ua2dEy1srG /tmp/tmp.aDf6QsOJqu /tmp/tmp.vFu5WQSIs8 00:13:50.684 00:13:50.684 real 1m20.314s 00:13:50.684 user 2m0.395s 00:13:50.684 sys 0m29.406s 00:13:50.684 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.684 21:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.684 ************************************ 00:13:50.684 END TEST nvmf_tls 00:13:50.684 ************************************ 00:13:50.684 21:27:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:50.684 21:27:23 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:50.684 21:27:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.684 21:27:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.684 21:27:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.684 ************************************ 00:13:50.684 START TEST nvmf_fips 00:13:50.684 ************************************ 00:13:50.684 21:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:50.684 * Looking for test storage... 
00:13:50.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:50.942 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.942 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:50.942 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.942 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:50.943 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:13:50.944 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:13:51.203 Error setting digest 00:13:51.204 0002A6AF557F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:51.204 0002A6AF557F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:51.204 Cannot find device "nvmf_tgt_br" 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.204 Cannot find device "nvmf_tgt_br2" 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:51.204 Cannot find device "nvmf_tgt_br" 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:51.204 Cannot find device "nvmf_tgt_br2" 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:51.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:51.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:51.204 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:51.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:51.463 00:13:51.463 --- 10.0.0.2 ping statistics --- 00:13:51.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.463 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:51.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:51.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:51.463 00:13:51.463 --- 10.0.0.3 ping statistics --- 00:13:51.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.463 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:13:51.463 00:13:51.463 --- 10.0.0.1 ping statistics --- 00:13:51.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.463 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73773 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73773 00:13:51.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73773 ']' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:51.463 21:27:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:51.722 [2024-07-15 21:27:24.854215] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:13:51.722 [2024-07-15 21:27:24.854278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.722 [2024-07-15 21:27:24.993626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.722 [2024-07-15 21:27:25.076325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.722 [2024-07-15 21:27:25.076372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.722 [2024-07-15 21:27:25.076382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.722 [2024-07-15 21:27:25.076390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.722 [2024-07-15 21:27:25.076397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.722 [2024-07-15 21:27:25.076426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.981 [2024-07-15 21:27:25.117314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:52.549 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:52.809 [2024-07-15 21:27:25.918549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.809 [2024-07-15 21:27:25.934470] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:52.809 [2024-07-15 21:27:25.934632] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.809 [2024-07-15 21:27:25.963205] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:52.809 malloc0 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
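Before the initiator side of the FIPS test is started, fips.sh stages the TLS PSK on disk and hands the path to the target; condensed from the trace above (key value, path, and mode are exactly as traced, while the output redirection is implied by the helper rather than visible in xtrace):

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"     # interchange-format PSK, no trailing newline
  chmod 0600 "$key_path"           # keep the key file private
  # setup_nvmf_tgt_conf "$key_path" then feeds this path to the target via rpc.py;
  # the target notes the PSK-path mechanism is deprecated (nvmf_tcp_psk_path warning above).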
00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73808 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73808 /var/tmp/bdevperf.sock 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 73808 ']' 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.809 21:27:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:52.809 [2024-07-15 21:27:26.061376] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:13:52.809 [2024-07-15 21:27:26.061445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73808 ] 00:13:53.068 [2024-07-15 21:27:26.202453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.068 [2024-07-15 21:27:26.285611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.068 [2024-07-15 21:27:26.326837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.636 21:27:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.636 21:27:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:13:53.636 21:27:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:53.895 [2024-07-15 21:27:27.071575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.895 [2024-07-15 21:27:27.071675] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:53.895 TLSTESTn1 00:13:53.895 21:27:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:53.895 Running I/O for 10 seconds... 
00:14:06.100 00:14:06.100 Latency(us) 00:14:06.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.100 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:06.100 Verification LBA range: start 0x0 length 0x2000 00:14:06.100 TLSTESTn1 : 10.01 5684.34 22.20 0.00 0.00 22483.16 4895.46 20002.96 00:14:06.100 =================================================================================================================== 00:14:06.100 Total : 5684.34 22.20 0.00 0.00 22483.16 4895.46 20002.96 00:14:06.100 0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:06.100 nvmf_trace.0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73808 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73808 ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73808 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73808 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:06.100 killing process with pid 73808 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73808' 00:14:06.100 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.100 00:14:06.100 Latency(us) 00:14:06.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.100 =================================================================================================================== 00:14:06.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73808 00:14:06.100 [2024-07-15 21:27:37.401492] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73808 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.100 rmmod nvme_tcp 00:14:06.100 rmmod nvme_fabrics 00:14:06.100 rmmod nvme_keyring 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73773 ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73773 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 73773 ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 73773 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73773 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:06.100 killing process with pid 73773 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73773' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 73773 00:14:06.100 [2024-07-15 21:27:37.768323] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 73773 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.100 21:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.100 21:27:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:06.100 21:27:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:06.100 00:14:06.100 real 0m14.108s 00:14:06.100 user 0m18.225s 00:14:06.100 sys 0m6.130s 00:14:06.100 21:27:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.100 21:27:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:06.100 ************************************ 00:14:06.100 END TEST nvmf_fips 00:14:06.100 ************************************ 00:14:06.100 21:27:38 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.100 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:06.100 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:06.100 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:06.100 21:27:38 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.100 21:27:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.100 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:06.100 21:27:38 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.100 21:27:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.101 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:06.101 21:27:38 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:06.101 21:27:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.101 21:27:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.101 21:27:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.101 ************************************ 00:14:06.101 START TEST nvmf_identify 00:14:06.101 ************************************ 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:06.101 * Looking for test storage... 00:14:06.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.101 21:27:38 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.101 21:27:38 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:06.101 Cannot find device "nvmf_tgt_br" 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.101 Cannot find device "nvmf_tgt_br2" 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:06.101 21:27:38 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:06.101 Cannot find device "nvmf_tgt_br" 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:06.101 Cannot find device "nvmf_tgt_br2" 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.101 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:14:06.102 00:14:06.102 --- 10.0.0.2 ping statistics --- 00:14:06.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.102 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:06.102 00:14:06.102 --- 10.0.0.3 ping statistics --- 00:14:06.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.102 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:14:06.102 00:14:06.102 --- 10.0.0.1 ping statistics --- 00:14:06.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.102 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74160 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74160 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 74160 ']' 00:14:06.102 21:27:38 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.102 21:27:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.102 [2024-07-15 21:27:38.839716] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:14:06.102 [2024-07-15 21:27:38.839777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.102 [2024-07-15 21:27:38.968036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.102 [2024-07-15 21:27:39.053234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.102 [2024-07-15 21:27:39.053287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.102 [2024-07-15 21:27:39.053296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.102 [2024-07-15 21:27:39.053304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.102 [2024-07-15 21:27:39.053311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:06.102 [2024-07-15 21:27:39.053549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.102 [2024-07-15 21:27:39.053736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.102 [2024-07-15 21:27:39.054680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.102 [2024-07-15 21:27:39.054681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.102 [2024-07-15 21:27:39.096831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.360 [2024-07-15 21:27:39.685785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.360 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 Malloc0 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 [2024-07-15 21:27:39.801028] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.617 [ 00:14:06.617 { 00:14:06.617 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.617 "subtype": "Discovery", 00:14:06.617 "listen_addresses": [ 00:14:06.617 { 00:14:06.617 "trtype": "TCP", 00:14:06.617 "adrfam": "IPv4", 00:14:06.617 "traddr": "10.0.0.2", 00:14:06.617 "trsvcid": "4420" 00:14:06.617 } 00:14:06.617 ], 00:14:06.617 "allow_any_host": true, 00:14:06.617 "hosts": [] 00:14:06.617 }, 00:14:06.617 { 00:14:06.617 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.617 "subtype": "NVMe", 00:14:06.617 "listen_addresses": [ 00:14:06.617 { 00:14:06.617 "trtype": "TCP", 00:14:06.617 "adrfam": "IPv4", 00:14:06.617 "traddr": "10.0.0.2", 00:14:06.617 "trsvcid": "4420" 00:14:06.617 } 00:14:06.617 ], 00:14:06.617 "allow_any_host": true, 00:14:06.617 "hosts": [], 00:14:06.617 "serial_number": "SPDK00000000000001", 00:14:06.617 "model_number": "SPDK bdev Controller", 00:14:06.617 "max_namespaces": 32, 00:14:06.617 "min_cntlid": 1, 00:14:06.617 "max_cntlid": 65519, 00:14:06.617 "namespaces": [ 00:14:06.617 { 00:14:06.617 "nsid": 1, 00:14:06.617 "bdev_name": "Malloc0", 00:14:06.617 "name": "Malloc0", 00:14:06.617 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:06.617 "eui64": "ABCDEF0123456789", 00:14:06.617 "uuid": "1029dab6-39bb-4b1b-b9c2-7ae94780c5fd" 00:14:06.617 } 00:14:06.617 ] 00:14:06.617 } 00:14:06.617 ] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.617 21:27:39 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:06.617 [2024-07-15 21:27:39.875835] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:14:06.617 [2024-07-15 21:27:39.875879] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74195 ] 00:14:06.880 [2024-07-15 21:27:40.008558] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:06.880 [2024-07-15 21:27:40.008609] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:06.880 [2024-07-15 21:27:40.008614] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:06.880 [2024-07-15 21:27:40.008626] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:06.880 [2024-07-15 21:27:40.008632] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:06.880 [2024-07-15 21:27:40.008749] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:06.880 [2024-07-15 21:27:40.008786] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7dc2c0 0 00:14:06.880 [2024-07-15 21:27:40.014836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:06.880 [2024-07-15 21:27:40.014854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:06.880 [2024-07-15 21:27:40.014859] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:06.880 [2024-07-15 21:27:40.014863] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:06.880 [2024-07-15 21:27:40.014909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.014914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.014919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.014930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:06.880 [2024-07-15 21:27:40.014952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.022836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.880 [2024-07-15 21:27:40.022846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.880 [2024-07-15 21:27:40.022850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.022855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.880 [2024-07-15 21:27:40.022864] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:06.880 [2024-07-15 21:27:40.022871] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:06.880 [2024-07-15 21:27:40.022877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:06.880 [2024-07-15 21:27:40.022892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.022897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 
[2024-07-15 21:27:40.022901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.022908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.880 [2024-07-15 21:27:40.022927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.022970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.880 [2024-07-15 21:27:40.022976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.880 [2024-07-15 21:27:40.022980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.022984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.880 [2024-07-15 21:27:40.022989] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:06.880 [2024-07-15 21:27:40.022996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:06.880 [2024-07-15 21:27:40.023002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.023017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.880 [2024-07-15 21:27:40.023030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.023074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.880 [2024-07-15 21:27:40.023080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.880 [2024-07-15 21:27:40.023084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.880 [2024-07-15 21:27:40.023093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:06.880 [2024-07-15 21:27:40.023100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.880 [2024-07-15 21:27:40.023107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.023121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.880 [2024-07-15 21:27:40.023133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.023175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.880 [2024-07-15 21:27:40.023181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:06.880 [2024-07-15 21:27:40.023184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.880 [2024-07-15 21:27:40.023194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.880 [2024-07-15 21:27:40.023202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.023216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.880 [2024-07-15 21:27:40.023228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.023264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.880 [2024-07-15 21:27:40.023270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.880 [2024-07-15 21:27:40.023274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.880 [2024-07-15 21:27:40.023283] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:06.880 [2024-07-15 21:27:40.023288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:06.880 [2024-07-15 21:27:40.023295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.880 [2024-07-15 21:27:40.023400] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:06.880 [2024-07-15 21:27:40.023406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.880 [2024-07-15 21:27:40.023414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.880 [2024-07-15 21:27:40.023422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.880 [2024-07-15 21:27:40.023428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.880 [2024-07-15 21:27:40.023440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.880 [2024-07-15 21:27:40.023479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.023485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.023489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023493] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.023497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.881 [2024-07-15 21:27:40.023505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.881 [2024-07-15 21:27:40.023531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.881 [2024-07-15 21:27:40.023570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.023576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.023580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.023588] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.881 [2024-07-15 21:27:40.023593] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.023600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:06.881 [2024-07-15 21:27:40.023609] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.023618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.881 [2024-07-15 21:27:40.023640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.881 [2024-07-15 21:27:40.023706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.881 [2024-07-15 21:27:40.023712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.881 [2024-07-15 21:27:40.023716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023720] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7dc2c0): datao=0, datal=4096, cccid=0 00:14:06.881 [2024-07-15 21:27:40.023725] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81d940) on tqpair(0x7dc2c0): expected_datao=0, payload_size=4096 00:14:06.881 [2024-07-15 21:27:40.023730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023737] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.023754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.023758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.023769] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:06.881 [2024-07-15 21:27:40.023774] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:06.881 [2024-07-15 21:27:40.023779] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:06.881 [2024-07-15 21:27:40.023784] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:06.881 [2024-07-15 21:27:40.023789] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:06.881 [2024-07-15 21:27:40.023794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.023802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.023808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.881 [2024-07-15 21:27:40.023844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.881 [2024-07-15 21:27:40.023889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.023895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.023899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.023909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.881 [2024-07-15 21:27:40.023929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:06.881 [2024-07-15 21:27:40.023933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.881 [2024-07-15 21:27:40.023948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.881 [2024-07-15 21:27:40.023967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.023975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.023980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.881 [2024-07-15 21:27:40.023985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.023996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.881 [2024-07-15 21:27:40.024003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.024013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.881 [2024-07-15 21:27:40.024027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81d940, cid 0, qid 0 00:14:06.881 [2024-07-15 21:27:40.024032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81dac0, cid 1, qid 0 00:14:06.881 [2024-07-15 21:27:40.024037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81dc40, cid 2, qid 0 00:14:06.881 [2024-07-15 21:27:40.024041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.881 [2024-07-15 21:27:40.024046] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81df40, cid 4, qid 0 00:14:06.881 [2024-07-15 21:27:40.024110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.024115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.024119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81df40) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.024128] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:06.881 [2024-07-15 21:27:40.024136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:06.881 [2024-07-15 21:27:40.024146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.024156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.881 [2024-07-15 21:27:40.024169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81df40, cid 4, qid 0 00:14:06.881 [2024-07-15 21:27:40.024214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.881 [2024-07-15 21:27:40.024220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.881 [2024-07-15 21:27:40.024223] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024227] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7dc2c0): datao=0, datal=4096, cccid=4 00:14:06.881 [2024-07-15 21:27:40.024232] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81df40) on tqpair(0x7dc2c0): expected_datao=0, payload_size=4096 00:14:06.881 [2024-07-15 21:27:40.024237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024243] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.881 [2024-07-15 21:27:40.024260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.881 [2024-07-15 21:27:40.024264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81df40) on tqpair=0x7dc2c0 00:14:06.881 [2024-07-15 21:27:40.024279] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:06.881 [2024-07-15 21:27:40.024302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7dc2c0) 00:14:06.881 [2024-07-15 21:27:40.024312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.881 [2024-07-15 21:27:40.024319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024323] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.881 [2024-07-15 21:27:40.024326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7dc2c0) 00:14:06.882 [2024-07-15 21:27:40.024332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.882 [2024-07-15 21:27:40.024349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81df40, cid 4, qid 0 00:14:06.882 [2024-07-15 21:27:40.024355] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81e0c0, cid 5, qid 0 00:14:06.882 [2024-07-15 21:27:40.024432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.882 [2024-07-15 21:27:40.024438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.882 [2024-07-15 21:27:40.024442] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024446] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7dc2c0): datao=0, datal=1024, cccid=4 00:14:06.882 [2024-07-15 21:27:40.024451] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81df40) on tqpair(0x7dc2c0): expected_datao=0, payload_size=1024 00:14:06.882 [2024-07-15 21:27:40.024455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024461] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024465] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.882 [2024-07-15 21:27:40.024476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.882 [2024-07-15 21:27:40.024479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81e0c0) on tqpair=0x7dc2c0 00:14:06.882 [2024-07-15 21:27:40.024496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.882 [2024-07-15 21:27:40.024502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.882 [2024-07-15 21:27:40.024506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81df40) on tqpair=0x7dc2c0 00:14:06.882 [2024-07-15 21:27:40.024527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7dc2c0) 00:14:06.882 [2024-07-15 21:27:40.024537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.882 [2024-07-15 21:27:40.024554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81df40, cid 4, qid 0 00:14:06.882 [2024-07-15 21:27:40.024603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.882 [2024-07-15 21:27:40.024609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.882 [2024-07-15 21:27:40.024612] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024616] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7dc2c0): datao=0, datal=3072, cccid=4 00:14:06.882 [2024-07-15 21:27:40.024621] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81df40) on tqpair(0x7dc2c0): expected_datao=0, payload_size=3072 00:14:06.882 [2024-07-15 21:27:40.024626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024632] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024636] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 
21:27:40.024643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.882 [2024-07-15 21:27:40.024649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.882 [2024-07-15 21:27:40.024652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81df40) on tqpair=0x7dc2c0 00:14:06.882 [2024-07-15 21:27:40.024664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7dc2c0) 00:14:06.882 [2024-07-15 21:27:40.024675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.882 [2024-07-15 21:27:40.024691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81df40, cid 4, qid 0 00:14:06.882 [2024-07-15 21:27:40.024740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.882 [2024-07-15 21:27:40.024746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.882 [2024-07-15 21:27:40.024750] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024754] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7dc2c0): datao=0, datal=8, cccid=4 00:14:06.882 [2024-07-15 21:27:40.024758] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x81df40) on tqpair(0x7dc2c0): expected_datao=0, payload_size=8 00:14:06.882 [2024-07-15 21:27:40.024763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024769] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024772] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.882 [2024-07-15 21:27:40.024789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.882 [2024-07-15 21:27:40.024793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.882 [2024-07-15 21:27:40.024797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81df40) on tqpair=0x7dc2c0 00:14:06.882 ===================================================== 00:14:06.882 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:06.882 ===================================================== 00:14:06.882 Controller Capabilities/Features 00:14:06.882 ================================ 00:14:06.882 Vendor ID: 0000 00:14:06.882 Subsystem Vendor ID: 0000 00:14:06.882 Serial Number: .................... 00:14:06.882 Model Number: ........................................ 
00:14:06.882 Firmware Version: 24.09
00:14:06.882 Recommended Arb Burst: 0
00:14:06.882 IEEE OUI Identifier: 00 00 00
00:14:06.882 Multi-path I/O
00:14:06.882 May have multiple subsystem ports: No
00:14:06.882 May have multiple controllers: No
00:14:06.882 Associated with SR-IOV VF: No
00:14:06.882 Max Data Transfer Size: 131072
00:14:06.882 Max Number of Namespaces: 0
00:14:06.882 Max Number of I/O Queues: 1024
00:14:06.882 NVMe Specification Version (VS): 1.3
00:14:06.882 NVMe Specification Version (Identify): 1.3
00:14:06.882 Maximum Queue Entries: 128
00:14:06.882 Contiguous Queues Required: Yes
00:14:06.882 Arbitration Mechanisms Supported
00:14:06.882 Weighted Round Robin: Not Supported
00:14:06.882 Vendor Specific: Not Supported
00:14:06.882 Reset Timeout: 15000 ms
00:14:06.882 Doorbell Stride: 4 bytes
00:14:06.882 NVM Subsystem Reset: Not Supported
00:14:06.882 Command Sets Supported
00:14:06.882 NVM Command Set: Supported
00:14:06.882 Boot Partition: Not Supported
00:14:06.882 Memory Page Size Minimum: 4096 bytes
00:14:06.882 Memory Page Size Maximum: 4096 bytes
00:14:06.882 Persistent Memory Region: Not Supported
00:14:06.882 Optional Asynchronous Events Supported
00:14:06.882 Namespace Attribute Notices: Not Supported
00:14:06.882 Firmware Activation Notices: Not Supported
00:14:06.882 ANA Change Notices: Not Supported
00:14:06.882 PLE Aggregate Log Change Notices: Not Supported
00:14:06.882 LBA Status Info Alert Notices: Not Supported
00:14:06.882 EGE Aggregate Log Change Notices: Not Supported
00:14:06.882 Normal NVM Subsystem Shutdown event: Not Supported
00:14:06.882 Zone Descriptor Change Notices: Not Supported
00:14:06.882 Discovery Log Change Notices: Supported
00:14:06.882 Controller Attributes
00:14:06.882 128-bit Host Identifier: Not Supported
00:14:06.882 Non-Operational Permissive Mode: Not Supported
00:14:06.882 NVM Sets: Not Supported
00:14:06.882 Read Recovery Levels: Not Supported
00:14:06.882 Endurance Groups: Not Supported
00:14:06.882 Predictable Latency Mode: Not Supported
00:14:06.882 Traffic Based Keep ALive: Not Supported
00:14:06.882 Namespace Granularity: Not Supported
00:14:06.882 SQ Associations: Not Supported
00:14:06.882 UUID List: Not Supported
00:14:06.882 Multi-Domain Subsystem: Not Supported
00:14:06.882 Fixed Capacity Management: Not Supported
00:14:06.882 Variable Capacity Management: Not Supported
00:14:06.882 Delete Endurance Group: Not Supported
00:14:06.882 Delete NVM Set: Not Supported
00:14:06.882 Extended LBA Formats Supported: Not Supported
00:14:06.882 Flexible Data Placement Supported: Not Supported
00:14:06.882 
00:14:06.882 Controller Memory Buffer Support
00:14:06.882 ================================
00:14:06.882 Supported: No
00:14:06.882 
00:14:06.882 Persistent Memory Region Support
00:14:06.882 ================================
00:14:06.882 Supported: No
00:14:06.882 
00:14:06.882 Admin Command Set Attributes
00:14:06.882 ============================
00:14:06.882 Security Send/Receive: Not Supported
00:14:06.882 Format NVM: Not Supported
00:14:06.882 Firmware Activate/Download: Not Supported
00:14:06.882 Namespace Management: Not Supported
00:14:06.882 Device Self-Test: Not Supported
00:14:06.882 Directives: Not Supported
00:14:06.882 NVMe-MI: Not Supported
00:14:06.882 Virtualization Management: Not Supported
00:14:06.882 Doorbell Buffer Config: Not Supported
00:14:06.882 Get LBA Status Capability: Not Supported
00:14:06.882 Command & Feature Lockdown Capability: Not Supported
00:14:06.882 Abort Command Limit: 1
00:14:06.882 Async Event Request Limit: 4
00:14:06.882 Number of Firmware Slots: N/A
00:14:06.882 Firmware Slot 1 Read-Only: N/A
00:14:06.882 Firmware Activation Without Reset: N/A
00:14:06.882 Multiple Update Detection Support: N/A
00:14:06.882 Firmware Update Granularity: No Information Provided
00:14:06.882 Per-Namespace SMART Log: No
00:14:06.882 Asymmetric Namespace Access Log Page: Not Supported
00:14:06.882 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:14:06.882 Command Effects Log Page: Not Supported
00:14:06.882 Get Log Page Extended Data: Supported
00:14:06.882 Telemetry Log Pages: Not Supported
00:14:06.882 Persistent Event Log Pages: Not Supported
00:14:06.882 Supported Log Pages Log Page: May Support
00:14:06.882 Commands Supported & Effects Log Page: Not Supported
00:14:06.882 Feature Identifiers & Effects Log Page:May Support
00:14:06.882 NVMe-MI Commands & Effects Log Page: May Support
00:14:06.882 Data Area 4 for Telemetry Log: Not Supported
00:14:06.882 Error Log Page Entries Supported: 128
00:14:06.882 Keep Alive: Not Supported
00:14:06.882 
00:14:06.882 NVM Command Set Attributes
00:14:06.882 ==========================
00:14:06.882 Submission Queue Entry Size
00:14:06.882 Max: 1
00:14:06.882 Min: 1
00:14:06.882 Completion Queue Entry Size
00:14:06.882 Max: 1
00:14:06.882 Min: 1
00:14:06.883 Number of Namespaces: 0
00:14:06.883 Compare Command: Not Supported
00:14:06.883 Write Uncorrectable Command: Not Supported
00:14:06.883 Dataset Management Command: Not Supported
00:14:06.883 Write Zeroes Command: Not Supported
00:14:06.883 Set Features Save Field: Not Supported
00:14:06.883 Reservations: Not Supported
00:14:06.883 Timestamp: Not Supported
00:14:06.883 Copy: Not Supported
00:14:06.883 Volatile Write Cache: Not Present
00:14:06.883 Atomic Write Unit (Normal): 1
00:14:06.883 Atomic Write Unit (PFail): 1
00:14:06.883 Atomic Compare & Write Unit: 1
00:14:06.883 Fused Compare & Write: Supported
00:14:06.883 Scatter-Gather List
00:14:06.883 SGL Command Set: Supported
00:14:06.883 SGL Keyed: Supported
00:14:06.883 SGL Bit Bucket Descriptor: Not Supported
00:14:06.883 SGL Metadata Pointer: Not Supported
00:14:06.883 Oversized SGL: Not Supported
00:14:06.883 SGL Metadata Address: Not Supported
00:14:06.883 SGL Offset: Supported
00:14:06.883 Transport SGL Data Block: Not Supported
00:14:06.883 Replay Protected Memory Block: Not Supported
00:14:06.883 
00:14:06.883 Firmware Slot Information
00:14:06.883 =========================
00:14:06.883 Active slot: 0
00:14:06.883 
00:14:06.883 
00:14:06.883 Error Log
00:14:06.883 =========
00:14:06.883 
00:14:06.883 Active Namespaces
00:14:06.883 =================
00:14:06.883 Discovery Log Page
00:14:06.883 ==================
00:14:06.883 Generation Counter: 2
00:14:06.883 Number of Records: 2
00:14:06.883 Record Format: 0
00:14:06.883 
00:14:06.883 Discovery Log Entry 0
00:14:06.883 ----------------------
00:14:06.883 Transport Type: 3 (TCP)
00:14:06.883 Address Family: 1 (IPv4)
00:14:06.883 Subsystem Type: 3 (Current Discovery Subsystem)
00:14:06.883 Entry Flags:
00:14:06.883 Duplicate Returned Information: 1
00:14:06.883 Explicit Persistent Connection Support for Discovery: 1
00:14:06.883 Transport Requirements:
00:14:06.883 Secure Channel: Not Required
00:14:06.883 Port ID: 0 (0x0000)
00:14:06.883 Controller ID: 65535 (0xffff)
00:14:06.883 Admin Max SQ Size: 128
00:14:06.883 Transport Service Identifier: 4420
00:14:06.883 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:14:06.883 Transport Address: 10.0.0.2
00:14:06.883 
Discovery Log Entry 1 00:14:06.883 ---------------------- 00:14:06.883 Transport Type: 3 (TCP) 00:14:06.883 Address Family: 1 (IPv4) 00:14:06.883 Subsystem Type: 2 (NVM Subsystem) 00:14:06.883 Entry Flags: 00:14:06.883 Duplicate Returned Information: 0 00:14:06.883 Explicit Persistent Connection Support for Discovery: 0 00:14:06.883 Transport Requirements: 00:14:06.883 Secure Channel: Not Required 00:14:06.883 Port ID: 0 (0x0000) 00:14:06.883 Controller ID: 65535 (0xffff) 00:14:06.883 Admin Max SQ Size: 128 00:14:06.883 Transport Service Identifier: 4420 00:14:06.883 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:06.883 Transport Address: 10.0.0.2 [2024-07-15 21:27:40.024895] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:06.883 [2024-07-15 21:27:40.024906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81d940) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.024912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.883 [2024-07-15 21:27:40.024918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81dac0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.024923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.883 [2024-07-15 21:27:40.024928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81dc40) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.024932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.883 [2024-07-15 21:27:40.024938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.024942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.883 [2024-07-15 21:27:40.024950] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.024954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.024958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.024964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.024980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025049] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025127] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:06.883 [2024-07-15 21:27:40.025132] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:06.883 [2024-07-15 21:27:40.025141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025317] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025321] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.883 [2024-07-15 21:27:40.025501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.883 [2024-07-15 21:27:40.025509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.883 [2024-07-15 21:27:40.025516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.883 [2024-07-15 21:27:40.025528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.883 [2024-07-15 21:27:40.025562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.883 [2024-07-15 21:27:40.025568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.883 [2024-07-15 21:27:40.025572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.025585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.025600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.025613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.025650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.025656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.025660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.025686] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.025700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.025712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.025749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.025755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.025759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.025771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.025785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.025797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.025837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.025842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.025855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.025868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.025882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.025895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.025950] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.025956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.025960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.025973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.025981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.025988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.026074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.026161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026229] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.026241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.026328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026354] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 [2024-07-15 21:27:40.026415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.884 [2024-07-15 21:27:40.026442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.884 [2024-07-15 21:27:40.026474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.884 [2024-07-15 21:27:40.026480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.884 [2024-07-15 21:27:40.026484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.884 
[2024-07-15 21:27:40.026496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.884 [2024-07-15 21:27:40.026504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.884 [2024-07-15 21:27:40.026510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.026522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.885 [2024-07-15 21:27:40.026561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.026567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.026571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.885 [2024-07-15 21:27:40.026583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.885 [2024-07-15 21:27:40.026597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.026609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.885 [2024-07-15 21:27:40.026650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.026655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.026659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.885 [2024-07-15 21:27:40.026671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.885 [2024-07-15 21:27:40.026685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.026697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.885 [2024-07-15 21:27:40.026737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.026743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.026746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.885 [2024-07-15 21:27:40.026759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 
21:27:40.026767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.885 [2024-07-15 21:27:40.026773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.026785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.885 [2024-07-15 21:27:40.026821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.026827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.026831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.026835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.885 [2024-07-15 21:27:40.026843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.030835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.030840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7dc2c0) 00:14:06.885 [2024-07-15 21:27:40.030848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.030865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x81ddc0, cid 3, qid 0 00:14:06.885 [2024-07-15 21:27:40.030901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.030907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.030911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.030915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x81ddc0) on tqpair=0x7dc2c0 00:14:06.885 [2024-07-15 21:27:40.030923] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:06.885 00:14:06.885 21:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:06.885 [2024-07-15 21:27:40.076589] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
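The spdk_nvme_identify invocation above exercises SPDK's public host API end to end: parse a transport ID, connect (which drives the controller state machine traced in the DEBUG lines that follow), read the identify data, and detach (which triggers the controller shutdown and CSTS polling seen for the discovery controller earlier). A minimal C sketch of that flow, assuming SPDK headers and an NVMe-oF TCP target matching the command line above; the program name and printed fields are illustrative and error handling is trimmed:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    /* Environment setup corresponds to the DPDK/EAL initialization logged here. */
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* illustrative */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same connect string that was passed to spdk_nvme_identify via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* keep_alive_timeout_ms is the knob behind the "Sending keep alive
     * every 5000000 us" lines earlier in the log (10000 ms is the default). */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    ctrlr_opts.keep_alive_timeout_ms = 10000;

    /* Drives connect adminq -> read vs/cap -> enable -> identify, as traced below. */
    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect() failed\n");
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Subsystem NQN: %s\n", (const char *)cdata->subnqn);
    printf("Number of Namespaces: %u\n", cdata->nn);

    /* Detach performs the controller shutdown and waits on CSTS, the same
     * sequence the repeated FABRIC PROPERTY GET polling above reflects. */
    spdk_nvme_detach(ctrlr);
    return 0;
}

The stock identify tool reaches the same point through spdk_nvme_probe() with probe/attach callbacks; connecting by transport ID as above is simply the shortest way to show the sequence the DEBUG trace below walks through.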
00:14:06.885 [2024-07-15 21:27:40.076637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74197 ] 00:14:06.885 [2024-07-15 21:27:40.213379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:06.885 [2024-07-15 21:27:40.213435] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:06.885 [2024-07-15 21:27:40.213440] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:06.885 [2024-07-15 21:27:40.213452] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:06.885 [2024-07-15 21:27:40.213457] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:06.885 [2024-07-15 21:27:40.213578] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:06.885 [2024-07-15 21:27:40.213615] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e342c0 0 00:14:06.885 [2024-07-15 21:27:40.220849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:06.885 [2024-07-15 21:27:40.220872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:06.885 [2024-07-15 21:27:40.220879] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:06.885 [2024-07-15 21:27:40.220883] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:06.885 [2024-07-15 21:27:40.220933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.220939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.220944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.885 [2024-07-15 21:27:40.220958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:06.885 [2024-07-15 21:27:40.220988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.885 [2024-07-15 21:27:40.228835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.228856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.228862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.228868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.885 [2024-07-15 21:27:40.228883] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:06.885 [2024-07-15 21:27:40.228892] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:06.885 [2024-07-15 21:27:40.228899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:06.885 [2024-07-15 21:27:40.228917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.228923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.228928] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.885 [2024-07-15 21:27:40.228938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.228963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.885 [2024-07-15 21:27:40.229006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.229013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.229017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.885 [2024-07-15 21:27:40.229026] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:06.885 [2024-07-15 21:27:40.229033] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:06.885 [2024-07-15 21:27:40.229040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.885 [2024-07-15 21:27:40.229054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.229069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.885 [2024-07-15 21:27:40.229106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.229112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.229115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.885 [2024-07-15 21:27:40.229124] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:06.885 [2024-07-15 21:27:40.229132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.885 [2024-07-15 21:27:40.229138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229145] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.885 [2024-07-15 21:27:40.229152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.229171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.885 [2024-07-15 21:27:40.229204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.229213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.229217] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.885 [2024-07-15 21:27:40.229228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.885 [2024-07-15 21:27:40.229238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.885 [2024-07-15 21:27:40.229248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.885 [2024-07-15 21:27:40.229256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.885 [2024-07-15 21:27:40.229271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.885 [2024-07-15 21:27:40.229307] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.885 [2024-07-15 21:27:40.229313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.885 [2024-07-15 21:27:40.229318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.229329] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:06.886 [2024-07-15 21:27:40.229336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:06.886 [2024-07-15 21:27:40.229345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.886 [2024-07-15 21:27:40.229451] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:06.886 [2024-07-15 21:27:40.229456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.886 [2024-07-15 21:27:40.229465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.229482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.886 [2024-07-15 21:27:40.229496] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.886 [2024-07-15 21:27:40.229535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.229542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.229546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.229557] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.886 [2024-07-15 21:27:40.229567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.229584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.886 [2024-07-15 21:27:40.229598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.886 [2024-07-15 21:27:40.229636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.229643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.229648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.229658] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.886 [2024-07-15 21:27:40.229665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.229673] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:06.886 [2024-07-15 21:27:40.229685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.229695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.229708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.886 [2024-07-15 21:27:40.229722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.886 [2024-07-15 21:27:40.229791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.886 [2024-07-15 21:27:40.229797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.886 [2024-07-15 21:27:40.229802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229807] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=4096, cccid=0 00:14:06.886 [2024-07-15 21:27:40.229813] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e75940) on tqpair(0x1e342c0): expected_datao=0, payload_size=4096 00:14:06.886 [2024-07-15 21:27:40.229828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229836] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229841] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 
21:27:40.229850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.229857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.229862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.229875] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:06.886 [2024-07-15 21:27:40.229881] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:06.886 [2024-07-15 21:27:40.229887] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:06.886 [2024-07-15 21:27:40.229892] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:06.886 [2024-07-15 21:27:40.229898] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:06.886 [2024-07-15 21:27:40.229904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.229913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.229921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.229930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.229938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.886 [2024-07-15 21:27:40.229953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.886 [2024-07-15 21:27:40.229993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.230002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.230008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.230020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.230033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.886 [2024-07-15 21:27:40.230039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e342c0) 00:14:06.886 
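The "configure AER" step and the ASYNC EVENT REQUEST submissions in this part of the trace arm asynchronous event reporting on the admin queue; an application observes those events through a registered callback while it polls admin completions. A minimal sketch, assuming ctrlr is the handle obtained from spdk_nvme_connect() in the earlier sketch (the callback and helper names are illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Invoked from spdk_nvme_ctrlr_process_admin_completions() when one of the
 * outstanding ASYNC EVENT REQUESTs completes. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    union spdk_nvme_async_event_completion event;

    (void)arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        return;
    }
    event.raw = cpl->cdw0;
    printf("AER: type=%u info=%u log page=0x%x\n",
           (unsigned)event.bits.async_event_type,
           (unsigned)event.bits.async_event_info,
           (unsigned)event.bits.log_page_identifier);
}

static void
arm_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    /* Events are only delivered while the admin queue is polled, e.g. by
     * calling spdk_nvme_ctrlr_process_admin_completions(ctrlr) in the
     * application's main loop. */
}

Discovery controllers such as the one identified above advertise "Discovery Log Change Notices: Supported", and a callback like this is where such a change notice would surface, prompting the host to re-read the discovery log page.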
[2024-07-15 21:27:40.230052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.886 [2024-07-15 21:27:40.230058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.230070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.886 [2024-07-15 21:27:40.230077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.230090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.886 [2024-07-15 21:27:40.230095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.230124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.886 [2024-07-15 21:27:40.230139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75940, cid 0, qid 0 00:14:06.886 [2024-07-15 21:27:40.230145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75ac0, cid 1, qid 0 00:14:06.886 [2024-07-15 21:27:40.230149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75c40, cid 2, qid 0 00:14:06.886 [2024-07-15 21:27:40.230154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.886 [2024-07-15 21:27:40.230158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.886 [2024-07-15 21:27:40.230224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.230229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.230233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.230241] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:06.886 [2024-07-15 21:27:40.230250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230258] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.886 [2024-07-15 21:27:40.230284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.886 [2024-07-15 21:27:40.230297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.886 [2024-07-15 21:27:40.230332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.886 [2024-07-15 21:27:40.230337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.886 [2024-07-15 21:27:40.230341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.886 [2024-07-15 21:27:40.230345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.886 [2024-07-15 21:27:40.230396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:06.886 [2024-07-15 21:27:40.230405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.230422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.230435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.887 [2024-07-15 21:27:40.230478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.887 [2024-07-15 21:27:40.230483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.887 [2024-07-15 21:27:40.230487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230491] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=4096, cccid=4 00:14:06.887 [2024-07-15 21:27:40.230495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e75f40) on tqpair(0x1e342c0): expected_datao=0, payload_size=4096 00:14:06.887 [2024-07-15 21:27:40.230500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230510] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.230523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.230526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.230541] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:06.887 [2024-07-15 21:27:40.230551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.230575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.230589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.887 [2024-07-15 21:27:40.230641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.887 [2024-07-15 21:27:40.230647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.887 [2024-07-15 21:27:40.230651] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230654] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=4096, cccid=4 00:14:06.887 [2024-07-15 21:27:40.230659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e75f40) on tqpair(0x1e342c0): expected_datao=0, payload_size=4096 00:14:06.887 [2024-07-15 21:27:40.230663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230669] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230673] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.230686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.230689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.230705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.230730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.230744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.887 [2024-07-15 21:27:40.230791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.887 [2024-07-15 21:27:40.230797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.887 [2024-07-15 21:27:40.230800] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230804] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=4096, cccid=4 00:14:06.887 [2024-07-15 21:27:40.230809] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e75f40) on tqpair(0x1e342c0): expected_datao=0, payload_size=4096 00:14:06.887 [2024-07-15 21:27:40.230813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230832] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.230845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.230849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.230860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230898] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.887 [2024-07-15 21:27:40.230903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:06.887 [2024-07-15 21:27:40.230908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:06.887 [2024-07-15 21:27:40.230923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.230933] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.230940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.230947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.230953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.887 [2024-07-15 21:27:40.230971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.887 [2024-07-15 21:27:40.230976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e760c0, cid 5, qid 0 00:14:06.887 [2024-07-15 21:27:40.231022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.231028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.231032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.231042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.231047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.231051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e760c0) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.231064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.231074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.231087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e760c0, cid 5, qid 0 00:14:06.887 [2024-07-15 21:27:40.231124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.231133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.231139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e760c0) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.231160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.231177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.231199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e760c0, cid 5, qid 0 00:14:06.887 [2024-07-15 21:27:40.231233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 
[2024-07-15 21:27:40.231241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.231247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e760c0) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.231266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.231282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.231301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e760c0, cid 5, qid 0 00:14:06.887 [2024-07-15 21:27:40.231343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.887 [2024-07-15 21:27:40.231351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.887 [2024-07-15 21:27:40.231357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231363] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e760c0) on tqpair=0x1e342c0 00:14:06.887 [2024-07-15 21:27:40.231390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e342c0) 00:14:06.887 [2024-07-15 21:27:40.231407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.887 [2024-07-15 21:27:40.231419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.887 [2024-07-15 21:27:40.231425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e342c0) 00:14:06.888 [2024-07-15 21:27:40.231435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.888 [2024-07-15 21:27:40.231447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e342c0) 00:14:06.888 [2024-07-15 21:27:40.231462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.888 [2024-07-15 21:27:40.231478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e342c0) 00:14:06.888 [2024-07-15 21:27:40.231495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.888 [2024-07-15 21:27:40.231519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e760c0, cid 5, qid 0 00:14:06.888 [2024-07-15 21:27:40.231528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75f40, cid 4, qid 0 00:14:06.888 [2024-07-15 21:27:40.231535] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e76240, cid 6, qid 0 00:14:06.888 [2024-07-15 21:27:40.231542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e763c0, cid 7, qid 0 00:14:06.888 [2024-07-15 21:27:40.231644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.888 [2024-07-15 21:27:40.231652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.888 [2024-07-15 21:27:40.231657] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231662] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=8192, cccid=5 00:14:06.888 [2024-07-15 21:27:40.231668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e760c0) on tqpair(0x1e342c0): expected_datao=0, payload_size=8192 00:14:06.888 [2024-07-15 21:27:40.231674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231689] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231694] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.888 [2024-07-15 21:27:40.231707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.888 [2024-07-15 21:27:40.231712] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231717] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=512, cccid=4 00:14:06.888 [2024-07-15 21:27:40.231723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e75f40) on tqpair(0x1e342c0): expected_datao=0, payload_size=512 00:14:06.888 [2024-07-15 21:27:40.231728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231735] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231740] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.888 [2024-07-15 21:27:40.231752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.888 [2024-07-15 21:27:40.231757] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231762] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e342c0): datao=0, datal=512, cccid=6 00:14:06.888 [2024-07-15 21:27:40.231768] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e76240) on tqpair(0x1e342c0): expected_datao=0, payload_size=512 00:14:06.888 [2024-07-15 21:27:40.231773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231780] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231784] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:06.888 [2024-07-15 21:27:40.231797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:06.888 [2024-07-15 21:27:40.231801] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231806] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1e342c0): datao=0, datal=4096, cccid=7 00:14:06.888 [2024-07-15 21:27:40.231812] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e763c0) on tqpair(0x1e342c0): expected_datao=0, payload_size=4096 00:14:06.888 [2024-07-15 21:27:40.231828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231836] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231841] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.888 [2024-07-15 21:27:40.231853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.888 [2024-07-15 21:27:40.231858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e760c0) on tqpair=0x1e342c0 00:14:06.888 [2024-07-15 21:27:40.231882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.888 [2024-07-15 21:27:40.231889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.888 [2024-07-15 21:27:40.231893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75f40) on tqpair=0x1e342c0 00:14:06.888 [2024-07-15 21:27:40.231912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.888 [2024-07-15 21:27:40.231918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.888 [2024-07-15 21:27:40.231923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e76240) on tqpair=0x1e342c0 00:14:06.888 [2024-07-15 21:27:40.231936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.888 [2024-07-15 21:27:40.231942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.888 [2024-07-15 21:27:40.231947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.888 [2024-07-15 21:27:40.231951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e763c0) on tqpair=0x1e342c0 00:14:06.888 ===================================================== 00:14:06.888 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.888 ===================================================== 00:14:06.888 Controller Capabilities/Features 00:14:06.888 ================================ 00:14:06.888 Vendor ID: 8086 00:14:06.888 Subsystem Vendor ID: 8086 00:14:06.888 Serial Number: SPDK00000000000001 00:14:06.888 Model Number: SPDK bdev Controller 00:14:06.888 Firmware Version: 24.09 00:14:06.888 Recommended Arb Burst: 6 00:14:06.888 IEEE OUI Identifier: e4 d2 5c 00:14:06.888 Multi-path I/O 00:14:06.888 May have multiple subsystem ports: Yes 00:14:06.888 May have multiple controllers: Yes 00:14:06.888 Associated with SR-IOV VF: No 00:14:06.888 Max Data Transfer Size: 131072 00:14:06.888 Max Number of Namespaces: 32 00:14:06.888 Max Number of I/O Queues: 127 00:14:06.888 NVMe Specification Version (VS): 1.3 00:14:06.888 NVMe Specification Version (Identify): 1.3 00:14:06.888 Maximum Queue Entries: 128 00:14:06.888 Contiguous Queues Required: Yes 00:14:06.888 Arbitration Mechanisms Supported 00:14:06.888 Weighted Round Robin: Not Supported 
00:14:06.888 Vendor Specific: Not Supported 00:14:06.888 Reset Timeout: 15000 ms 00:14:06.888 Doorbell Stride: 4 bytes 00:14:06.888 NVM Subsystem Reset: Not Supported 00:14:06.888 Command Sets Supported 00:14:06.888 NVM Command Set: Supported 00:14:06.888 Boot Partition: Not Supported 00:14:06.888 Memory Page Size Minimum: 4096 bytes 00:14:06.888 Memory Page Size Maximum: 4096 bytes 00:14:06.888 Persistent Memory Region: Not Supported 00:14:06.888 Optional Asynchronous Events Supported 00:14:06.888 Namespace Attribute Notices: Supported 00:14:06.888 Firmware Activation Notices: Not Supported 00:14:06.888 ANA Change Notices: Not Supported 00:14:06.888 PLE Aggregate Log Change Notices: Not Supported 00:14:06.888 LBA Status Info Alert Notices: Not Supported 00:14:06.888 EGE Aggregate Log Change Notices: Not Supported 00:14:06.888 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.888 Zone Descriptor Change Notices: Not Supported 00:14:06.888 Discovery Log Change Notices: Not Supported 00:14:06.888 Controller Attributes 00:14:06.888 128-bit Host Identifier: Supported 00:14:06.888 Non-Operational Permissive Mode: Not Supported 00:14:06.888 NVM Sets: Not Supported 00:14:06.888 Read Recovery Levels: Not Supported 00:14:06.888 Endurance Groups: Not Supported 00:14:06.888 Predictable Latency Mode: Not Supported 00:14:06.888 Traffic Based Keep ALive: Not Supported 00:14:06.888 Namespace Granularity: Not Supported 00:14:06.888 SQ Associations: Not Supported 00:14:06.888 UUID List: Not Supported 00:14:06.888 Multi-Domain Subsystem: Not Supported 00:14:06.888 Fixed Capacity Management: Not Supported 00:14:06.888 Variable Capacity Management: Not Supported 00:14:06.888 Delete Endurance Group: Not Supported 00:14:06.888 Delete NVM Set: Not Supported 00:14:06.888 Extended LBA Formats Supported: Not Supported 00:14:06.888 Flexible Data Placement Supported: Not Supported 00:14:06.888 00:14:06.888 Controller Memory Buffer Support 00:14:06.888 ================================ 00:14:06.888 Supported: No 00:14:06.888 00:14:06.888 Persistent Memory Region Support 00:14:06.888 ================================ 00:14:06.888 Supported: No 00:14:06.888 00:14:06.888 Admin Command Set Attributes 00:14:06.889 ============================ 00:14:06.889 Security Send/Receive: Not Supported 00:14:06.889 Format NVM: Not Supported 00:14:06.889 Firmware Activate/Download: Not Supported 00:14:06.889 Namespace Management: Not Supported 00:14:06.889 Device Self-Test: Not Supported 00:14:06.889 Directives: Not Supported 00:14:06.889 NVMe-MI: Not Supported 00:14:06.889 Virtualization Management: Not Supported 00:14:06.889 Doorbell Buffer Config: Not Supported 00:14:06.889 Get LBA Status Capability: Not Supported 00:14:06.889 Command & Feature Lockdown Capability: Not Supported 00:14:06.889 Abort Command Limit: 4 00:14:06.889 Async Event Request Limit: 4 00:14:06.889 Number of Firmware Slots: N/A 00:14:06.889 Firmware Slot 1 Read-Only: N/A 00:14:06.889 Firmware Activation Without Reset: N/A 00:14:06.889 Multiple Update Detection Support: N/A 00:14:06.889 Firmware Update Granularity: No Information Provided 00:14:06.889 Per-Namespace SMART Log: No 00:14:06.889 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.889 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:06.889 Command Effects Log Page: Supported 00:14:06.889 Get Log Page Extended Data: Supported 00:14:06.889 Telemetry Log Pages: Not Supported 00:14:06.889 Persistent Event Log Pages: Not Supported 00:14:06.889 Supported Log Pages Log Page: May Support 
00:14:06.889 Commands Supported & Effects Log Page: Not Supported 00:14:06.889 Feature Identifiers & Effects Log Page:May Support 00:14:06.889 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.889 Data Area 4 for Telemetry Log: Not Supported 00:14:06.889 Error Log Page Entries Supported: 128 00:14:06.889 Keep Alive: Supported 00:14:06.889 Keep Alive Granularity: 10000 ms 00:14:06.889 00:14:06.889 NVM Command Set Attributes 00:14:06.889 ========================== 00:14:06.889 Submission Queue Entry Size 00:14:06.889 Max: 64 00:14:06.889 Min: 64 00:14:06.889 Completion Queue Entry Size 00:14:06.889 Max: 16 00:14:06.889 Min: 16 00:14:06.889 Number of Namespaces: 32 00:14:06.889 Compare Command: Supported 00:14:06.889 Write Uncorrectable Command: Not Supported 00:14:06.889 Dataset Management Command: Supported 00:14:06.889 Write Zeroes Command: Supported 00:14:06.889 Set Features Save Field: Not Supported 00:14:06.889 Reservations: Supported 00:14:06.889 Timestamp: Not Supported 00:14:06.889 Copy: Supported 00:14:06.889 Volatile Write Cache: Present 00:14:06.889 Atomic Write Unit (Normal): 1 00:14:06.889 Atomic Write Unit (PFail): 1 00:14:06.889 Atomic Compare & Write Unit: 1 00:14:06.889 Fused Compare & Write: Supported 00:14:06.889 Scatter-Gather List 00:14:06.889 SGL Command Set: Supported 00:14:06.889 SGL Keyed: Supported 00:14:06.889 SGL Bit Bucket Descriptor: Not Supported 00:14:06.889 SGL Metadata Pointer: Not Supported 00:14:06.889 Oversized SGL: Not Supported 00:14:06.889 SGL Metadata Address: Not Supported 00:14:06.889 SGL Offset: Supported 00:14:06.889 Transport SGL Data Block: Not Supported 00:14:06.889 Replay Protected Memory Block: Not Supported 00:14:06.889 00:14:06.889 Firmware Slot Information 00:14:06.889 ========================= 00:14:06.889 Active slot: 1 00:14:06.889 Slot 1 Firmware Revision: 24.09 00:14:06.889 00:14:06.889 00:14:06.889 Commands Supported and Effects 00:14:06.889 ============================== 00:14:06.889 Admin Commands 00:14:06.889 -------------- 00:14:06.889 Get Log Page (02h): Supported 00:14:06.889 Identify (06h): Supported 00:14:06.889 Abort (08h): Supported 00:14:06.889 Set Features (09h): Supported 00:14:06.889 Get Features (0Ah): Supported 00:14:06.889 Asynchronous Event Request (0Ch): Supported 00:14:06.889 Keep Alive (18h): Supported 00:14:06.889 I/O Commands 00:14:06.889 ------------ 00:14:06.889 Flush (00h): Supported LBA-Change 00:14:06.889 Write (01h): Supported LBA-Change 00:14:06.889 Read (02h): Supported 00:14:06.889 Compare (05h): Supported 00:14:06.889 Write Zeroes (08h): Supported LBA-Change 00:14:06.889 Dataset Management (09h): Supported LBA-Change 00:14:06.889 Copy (19h): Supported LBA-Change 00:14:06.889 00:14:06.889 Error Log 00:14:06.889 ========= 00:14:06.889 00:14:06.889 Arbitration 00:14:06.889 =========== 00:14:06.889 Arbitration Burst: 1 00:14:06.889 00:14:06.889 Power Management 00:14:06.889 ================ 00:14:06.889 Number of Power States: 1 00:14:06.889 Current Power State: Power State #0 00:14:06.889 Power State #0: 00:14:06.889 Max Power: 0.00 W 00:14:06.889 Non-Operational State: Operational 00:14:06.889 Entry Latency: Not Reported 00:14:06.889 Exit Latency: Not Reported 00:14:06.889 Relative Read Throughput: 0 00:14:06.889 Relative Read Latency: 0 00:14:06.889 Relative Write Throughput: 0 00:14:06.889 Relative Write Latency: 0 00:14:06.889 Idle Power: Not Reported 00:14:06.889 Active Power: Not Reported 00:14:06.889 Non-Operational Permissive Mode: Not Supported 00:14:06.889 00:14:06.889 Health 
Information 00:14:06.889 ================== 00:14:06.889 Critical Warnings: 00:14:06.889 Available Spare Space: OK 00:14:06.889 Temperature: OK 00:14:06.889 Device Reliability: OK 00:14:06.889 Read Only: No 00:14:06.889 Volatile Memory Backup: OK 00:14:06.889 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:06.889 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.889 Available Spare: 0% 00:14:06.889 Available Spare Threshold: 0% 00:14:06.889 Life Percentage Used:[2024-07-15 21:27:40.232072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e342c0) 00:14:06.889 [2024-07-15 21:27:40.232087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.889 [2024-07-15 21:27:40.232107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e763c0, cid 7, qid 0 00:14:06.889 [2024-07-15 21:27:40.232146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.889 [2024-07-15 21:27:40.232153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.889 [2024-07-15 21:27:40.232158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e763c0) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232200] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:06.889 [2024-07-15 21:27:40.232211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75940) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.889 [2024-07-15 21:27:40.232225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75ac0) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.889 [2024-07-15 21:27:40.232237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75c40) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.889 [2024-07-15 21:27:40.232249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:06.889 [2024-07-15 21:27:40.232264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232274] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.889 [2024-07-15 21:27:40.232281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.889 [2024-07-15 21:27:40.232297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.889 [2024-07-15 
21:27:40.232336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.889 [2024-07-15 21:27:40.232343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.889 [2024-07-15 21:27:40.232348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232360] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.889 [2024-07-15 21:27:40.232377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.889 [2024-07-15 21:27:40.232393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.889 [2024-07-15 21:27:40.232448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.889 [2024-07-15 21:27:40.232454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.889 [2024-07-15 21:27:40.232459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.889 [2024-07-15 21:27:40.232469] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:06.889 [2024-07-15 21:27:40.232475] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:06.889 [2024-07-15 21:27:40.232485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.889 [2024-07-15 21:27:40.232502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.889 [2024-07-15 21:27:40.232515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.889 [2024-07-15 21:27:40.232562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.889 [2024-07-15 21:27:40.232569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.889 [2024-07-15 21:27:40.232574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.889 [2024-07-15 21:27:40.232595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.890 [2024-07-15 21:27:40.232606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.890 [2024-07-15 21:27:40.232624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.890 [2024-07-15 21:27:40.232639] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.890 [2024-07-15 21:27:40.232674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.890 [2024-07-15 21:27:40.232681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.890 [2024-07-15 21:27:40.232686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.890 [2024-07-15 21:27:40.232701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.890 [2024-07-15 21:27:40.232719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.890 [2024-07-15 21:27:40.232733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.890 [2024-07-15 21:27:40.232768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.890 [2024-07-15 21:27:40.232776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.890 [2024-07-15 21:27:40.232780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.890 [2024-07-15 21:27:40.232796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.232806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.890 [2024-07-15 21:27:40.232814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.890 [2024-07-15 21:27:40.232828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.890 [2024-07-15 21:27:40.236855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.890 [2024-07-15 21:27:40.236872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.890 [2024-07-15 21:27:40.236878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.236883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.890 [2024-07-15 21:27:40.236898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.236904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.236909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e342c0) 00:14:06.890 [2024-07-15 21:27:40.236918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:06.890 [2024-07-15 21:27:40.236942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e75dc0, cid 3, qid 0 00:14:06.890 [2024-07-15 21:27:40.236981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:06.890 [2024-07-15 
21:27:40.236990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:06.890 [2024-07-15 21:27:40.236998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:06.890 [2024-07-15 21:27:40.237005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e75dc0) on tqpair=0x1e342c0 00:14:06.890 [2024-07-15 21:27:40.237015] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:14:07.148 0% 00:14:07.148 Data Units Read: 0 00:14:07.148 Data Units Written: 0 00:14:07.148 Host Read Commands: 0 00:14:07.148 Host Write Commands: 0 00:14:07.148 Controller Busy Time: 0 minutes 00:14:07.148 Power Cycles: 0 00:14:07.148 Power On Hours: 0 hours 00:14:07.148 Unsafe Shutdowns: 0 00:14:07.148 Unrecoverable Media Errors: 0 00:14:07.148 Lifetime Error Log Entries: 0 00:14:07.148 Warning Temperature Time: 0 minutes 00:14:07.148 Critical Temperature Time: 0 minutes 00:14:07.148 00:14:07.148 Number of Queues 00:14:07.148 ================ 00:14:07.148 Number of I/O Submission Queues: 127 00:14:07.148 Number of I/O Completion Queues: 127 00:14:07.148 00:14:07.148 Active Namespaces 00:14:07.148 ================= 00:14:07.148 Namespace ID:1 00:14:07.148 Error Recovery Timeout: Unlimited 00:14:07.148 Command Set Identifier: NVM (00h) 00:14:07.148 Deallocate: Supported 00:14:07.148 Deallocated/Unwritten Error: Not Supported 00:14:07.148 Deallocated Read Value: Unknown 00:14:07.148 Deallocate in Write Zeroes: Not Supported 00:14:07.148 Deallocated Guard Field: 0xFFFF 00:14:07.148 Flush: Supported 00:14:07.148 Reservation: Supported 00:14:07.148 Namespace Sharing Capabilities: Multiple Controllers 00:14:07.148 Size (in LBAs): 131072 (0GiB) 00:14:07.148 Capacity (in LBAs): 131072 (0GiB) 00:14:07.148 Utilization (in LBAs): 131072 (0GiB) 00:14:07.148 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:07.148 EUI64: ABCDEF0123456789 00:14:07.148 UUID: 1029dab6-39bb-4b1b-b9c2-7ae94780c5fd 00:14:07.148 Thin Provisioning: Not Supported 00:14:07.148 Per-NS Atomic Units: Yes 00:14:07.148 Atomic Boundary Size (Normal): 0 00:14:07.148 Atomic Boundary Size (PFail): 0 00:14:07.148 Atomic Boundary Offset: 0 00:14:07.148 Maximum Single Source Range Length: 65535 00:14:07.148 Maximum Copy Length: 65535 00:14:07.148 Maximum Source Range Count: 1 00:14:07.148 NGUID/EUI64 Never Reused: No 00:14:07.148 Namespace Write Protected: No 00:14:07.148 Number of LBA Formats: 1 00:14:07.148 Current LBA Format: LBA Format #00 00:14:07.148 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:07.148 00:14:07.148 21:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:07.148 21:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.148 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.148 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.148 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.149 21:27:40 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.149 rmmod nvme_tcp 00:14:07.149 rmmod nvme_fabrics 00:14:07.149 rmmod nvme_keyring 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74160 ']' 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74160 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 74160 ']' 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 74160 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74160 00:14:07.149 killing process with pid 74160 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74160' 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 74160 00:14:07.149 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 74160 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:07.407 00:14:07.407 real 0m2.537s 00:14:07.407 user 0m6.498s 00:14:07.407 sys 0m0.779s 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.407 ************************************ 00:14:07.407 21:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:07.407 END TEST nvmf_identify 00:14:07.407 ************************************ 00:14:07.407 21:27:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.407 21:27:40 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:07.407 21:27:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.407 21:27:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:14:07.407 21:27:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.667 ************************************ 00:14:07.667 START TEST nvmf_perf 00:14:07.667 ************************************ 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:07.667 * Looking for test storage... 00:14:07.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.667 21:27:40 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.668 21:27:40 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:07.668 21:27:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:07.668 Cannot find device "nvmf_tgt_br" 00:14:07.668 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:07.668 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.668 Cannot find device "nvmf_tgt_br2" 00:14:07.668 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:07.668 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:07.668 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:07.928 Cannot find device "nvmf_tgt_br" 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:07.928 Cannot find device "nvmf_tgt_br2" 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.928 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:08.188 00:14:08.188 --- 10.0.0.2 ping statistics --- 00:14:08.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.188 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:08.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:14:08.188 00:14:08.188 --- 10.0.0.3 ping statistics --- 00:14:08.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.188 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:08.188 00:14:08.188 --- 10.0.0.1 ping statistics --- 00:14:08.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.188 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74362 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74362 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 74362 ']' 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.188 21:27:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.188 [2024-07-15 21:27:41.393380] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:14:08.188 [2024-07-15 21:27:41.393445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.188 [2024-07-15 21:27:41.539214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.448 [2024-07-15 21:27:41.627395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.448 [2024-07-15 21:27:41.627434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.448 [2024-07-15 21:27:41.627443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.448 [2024-07-15 21:27:41.627452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.448 [2024-07-15 21:27:41.627458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.448 [2024-07-15 21:27:41.627572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.448 [2024-07-15 21:27:41.627752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.448 [2024-07-15 21:27:41.628457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.448 [2024-07-15 21:27:41.628458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.448 [2024-07-15 21:27:41.669669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:09.014 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:09.272 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:09.272 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:09.530 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:09.530 21:27:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:09.788 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:09.788 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:09.788 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:09.788 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:09.788 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.049 [2024-07-15 21:27:43.210554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
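For reference, the NVMe-oF target bring-up that perf.sh drives through rpc.py around this point reduces to the short sequence below. This is a condensed sketch, not the script itself; the NQN, serial number, bdev names, address and port are the ones that appear in this run, and $rpc stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport with the options carried in NVMF_TRANSPORT_OPTS (-t tcp -o)
  $rpc nvmf_create_transport -t tcp -o
  # Subsystem with two namespaces: the Malloc bdev and the local NVMe bdev created earlier in the log
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  # Listeners for the subsystem and for discovery on the namespaced target address
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420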
00:14:10.049 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.306 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:10.306 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.306 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:10.306 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:10.563 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.821 [2024-07-15 21:27:43.962348] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.821 21:27:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.821 21:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:10.821 21:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:10.821 21:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:10.821 21:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:12.195 Initializing NVMe Controllers 00:14:12.195 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:12.195 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:12.195 Initialization complete. Launching workers. 00:14:12.195 ======================================================== 00:14:12.195 Latency(us) 00:14:12.195 Device Information : IOPS MiB/s Average min max 00:14:12.195 PCIE (0000:00:10.0) NSID 1 from core 0: 19330.00 75.51 1655.77 225.38 7553.83 00:14:12.195 ======================================================== 00:14:12.195 Total : 19330.00 75.51 1655.77 225.38 7553.83 00:14:12.195 00:14:12.195 21:27:45 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:13.568 Initializing NVMe Controllers 00:14:13.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:13.568 Initialization complete. Launching workers. 
00:14:13.568 ======================================================== 00:14:13.568 Latency(us) 00:14:13.568 Device Information : IOPS MiB/s Average min max 00:14:13.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5037.71 19.68 198.30 76.56 6059.47 00:14:13.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.80 0.48 8141.76 5991.35 12027.18 00:14:13.568 ======================================================== 00:14:13.568 Total : 5161.51 20.16 388.82 76.56 12027.18 00:14:13.568 00:14:13.568 21:27:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:14.501 Initializing NVMe Controllers 00:14:14.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:14.501 Initialization complete. Launching workers. 00:14:14.501 ======================================================== 00:14:14.501 Latency(us) 00:14:14.501 Device Information : IOPS MiB/s Average min max 00:14:14.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11237.99 43.90 2847.95 491.48 6472.95 00:14:14.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8043.62 6972.61 12674.68 00:14:14.501 ======================================================== 00:14:14.501 Total : 15237.98 59.52 4211.82 491.48 12674.68 00:14:14.501 00:14:14.759 21:27:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:14.759 21:27:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:17.289 Initializing NVMe Controllers 00:14:17.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.289 Controller IO queue size 128, less than required. 00:14:17.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.289 Controller IO queue size 128, less than required. 00:14:17.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:17.289 Initialization complete. Launching workers. 
00:14:17.289 ======================================================== 00:14:17.289 Latency(us) 00:14:17.289 Device Information : IOPS MiB/s Average min max 00:14:17.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2214.73 553.68 58610.88 31184.46 91306.74 00:14:17.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 663.27 165.82 200565.98 45743.21 327851.93 00:14:17.289 ======================================================== 00:14:17.289 Total : 2878.00 719.50 91326.13 31184.46 327851.93 00:14:17.289 00:14:17.289 21:27:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:17.548 Initializing NVMe Controllers 00:14:17.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.548 Controller IO queue size 128, less than required. 00:14:17.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:17.548 Controller IO queue size 128, less than required. 00:14:17.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:17.548 WARNING: Some requested NVMe devices were skipped 00:14:17.548 No valid NVMe controllers or AIO or URING devices found 00:14:17.548 21:27:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:20.103 Initializing NVMe Controllers 00:14:20.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.103 Controller IO queue size 128, less than required. 00:14:20.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.103 Controller IO queue size 128, less than required. 00:14:20.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:20.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:20.103 Initialization complete. Launching workers. 
00:14:20.103 00:14:20.103 ==================== 00:14:20.103 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:20.103 TCP transport: 00:14:20.103 polls: 12349 00:14:20.103 idle_polls: 7552 00:14:20.103 sock_completions: 4797 00:14:20.103 nvme_completions: 8317 00:14:20.103 submitted_requests: 12498 00:14:20.103 queued_requests: 1 00:14:20.103 00:14:20.103 ==================== 00:14:20.103 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:20.103 TCP transport: 00:14:20.103 polls: 14523 00:14:20.103 idle_polls: 9467 00:14:20.103 sock_completions: 5056 00:14:20.103 nvme_completions: 8189 00:14:20.103 submitted_requests: 12284 00:14:20.103 queued_requests: 1 00:14:20.103 ======================================================== 00:14:20.103 Latency(us) 00:14:20.103 Device Information : IOPS MiB/s Average min max 00:14:20.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2078.58 519.64 63019.65 32565.39 103145.97 00:14:20.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2046.59 511.65 62741.17 27728.10 120396.53 00:14:20.103 ======================================================== 00:14:20.103 Total : 4125.16 1031.29 62881.49 27728.10 120396.53 00:14:20.103 00:14:20.103 21:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:20.103 21:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.361 rmmod nvme_tcp 00:14:20.361 rmmod nvme_fabrics 00:14:20.361 rmmod nvme_keyring 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74362 ']' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74362 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 74362 ']' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 74362 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74362 00:14:20.361 killing process with pid 74362 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.361 21:27:53 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74362' 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 74362 00:14:20.361 21:27:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 74362 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.296 21:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.297 21:27:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:21.297 00:14:21.297 real 0m13.594s 00:14:21.297 user 0m48.693s 00:14:21.297 sys 0m4.273s 00:14:21.297 ************************************ 00:14:21.297 END TEST nvmf_perf 00:14:21.297 ************************************ 00:14:21.297 21:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.297 21:27:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:21.297 21:27:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:21.297 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:21.297 21:27:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.297 21:27:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.297 21:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.297 ************************************ 00:14:21.297 START TEST nvmf_fio_host 00:14:21.297 ************************************ 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:21.297 * Looking for test storage... 
00:14:21.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
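The nvmf_veth_init that runs next (mirroring the one earlier in nvmf_perf) builds the virtual topology both host tests rely on: the initiator stays in the root namespace on 10.0.0.1, the target's two interfaces (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, and the host-side veth peers are tied together by the nvmf_br bridge. Condensed, it is roughly equivalent to the sketch below; the full command trace follows in the log.
  # Namespace plus three veth pairs (initiator, target, second target interface)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator in the root namespace, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bridge the host-side peers and open TCP port 4420 toward the target
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT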
00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.297 Cannot find device "nvmf_tgt_br" 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.297 Cannot find device "nvmf_tgt_br2" 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.297 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.556 Cannot find device "nvmf_tgt_br" 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.556 Cannot find device "nvmf_tgt_br2" 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.556 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.815 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.815 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.815 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.815 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.815 21:27:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:21.815 00:14:21.815 --- 10.0.0.2 ping statistics --- 00:14:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.815 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:21.815 00:14:21.815 --- 10.0.0.3 ping statistics --- 00:14:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.815 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:21.815 00:14:21.815 --- 10.0.0.1 ping statistics --- 00:14:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.815 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74773 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74773 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 74773 ']' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.815 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:21.815 [2024-07-15 21:27:55.094195] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:14:21.815 [2024-07-15 21:27:55.094726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.073 [2024-07-15 21:27:55.230449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.074 [2024-07-15 21:27:55.331397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:22.074 [2024-07-15 21:27:55.331445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.074 [2024-07-15 21:27:55.331455] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.074 [2024-07-15 21:27:55.331463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.074 [2024-07-15 21:27:55.331470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.074 [2024-07-15 21:27:55.331646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.074 [2024-07-15 21:27:55.331870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.074 [2024-07-15 21:27:55.332775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.074 [2024-07-15 21:27:55.332775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.074 [2024-07-15 21:27:55.375603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.641 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.641 21:27:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:22.641 21:27:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.899 [2024-07-15 21:27:56.133765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.899 21:27:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:22.899 21:27:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.899 21:27:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.900 21:27:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:23.158 Malloc1 00:14:23.158 21:27:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.415 21:27:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.674 21:27:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.933 [2024-07-15 21:27:57.096025] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.933 21:27:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:24.191 21:27:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:24.191 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:24.191 fio-3.35 00:14:24.191 Starting 1 thread 00:14:26.773 00:14:26.773 test: (groupid=0, jobs=1): err= 0: pid=74856: Mon Jul 15 21:27:59 2024 00:14:26.773 read: IOPS=10.8k, BW=42.2MiB/s (44.2MB/s)(84.6MiB/2006msec) 00:14:26.773 slat (nsec): min=1566, max=178990, avg=1817.28, stdev=1631.90 00:14:26.773 clat (usec): min=1282, max=11123, avg=6196.30, stdev=443.75 00:14:26.773 lat (usec): min=1306, max=11125, avg=6198.11, stdev=443.60 00:14:26.773 clat percentiles (usec): 00:14:26.773 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5866], 00:14:26.773 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6259], 00:14:26.773 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6652], 95.00th=[ 6849], 00:14:26.773 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 9765], 99.95th=[10290], 00:14:26.773 | 99.99th=[10683] 00:14:26.773 bw ( KiB/s): min=42860, max=43624, per=99.88%, avg=43123.00, stdev=340.84, samples=4 00:14:26.773 iops : min=10715, max=10906, avg=10780.75, stdev=85.21, samples=4 00:14:26.773 write: IOPS=10.8k, BW=42.1MiB/s (44.1MB/s)(84.4MiB/2006msec); 0 zone resets 00:14:26.773 
slat (nsec): min=1600, max=106655, avg=1906.06, stdev=980.64 00:14:26.774 clat (usec): min=1192, max=10514, avg=5633.39, stdev=400.12 00:14:26.774 lat (usec): min=1199, max=10516, avg=5635.29, stdev=400.06 00:14:26.774 clat percentiles (usec): 00:14:26.774 | 1.00th=[ 4817], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5342], 00:14:26.774 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:14:26.774 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6194], 00:14:26.774 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 8455], 99.95th=[ 9896], 00:14:26.774 | 99.99th=[10421] 00:14:26.774 bw ( KiB/s): min=42693, max=43336, per=99.94%, avg=43059.25, stdev=278.79, samples=4 00:14:26.774 iops : min=10673, max=10834, avg=10764.75, stdev=69.81, samples=4 00:14:26.774 lat (msec) : 2=0.07%, 4=0.14%, 10=99.75%, 20=0.05% 00:14:26.774 cpu : usr=67.48%, sys=25.44%, ctx=11, majf=0, minf=6 00:14:26.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:26.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:26.774 issued rwts: total=21653,21608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:26.774 00:14:26.774 Run status group 0 (all jobs): 00:14:26.774 READ: bw=42.2MiB/s (44.2MB/s), 42.2MiB/s-42.2MiB/s (44.2MB/s-44.2MB/s), io=84.6MiB (88.7MB), run=2006-2006msec 00:14:26.774 WRITE: bw=42.1MiB/s (44.1MB/s), 42.1MiB/s-42.1MiB/s (44.1MB/s-44.1MB/s), io=84.4MiB (88.5MB), run=2006-2006msec 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:26.774 21:27:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:26.774 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:26.774 fio-3.35 00:14:26.774 Starting 1 thread 00:14:29.300 00:14:29.300 test: (groupid=0, jobs=1): err= 0: pid=74900: Mon Jul 15 21:28:02 2024 00:14:29.300 read: IOPS=10.1k, BW=157MiB/s (165MB/s)(315MiB/2005msec) 00:14:29.300 slat (usec): min=2, max=104, avg= 3.01, stdev= 1.72 00:14:29.300 clat (usec): min=1906, max=14006, avg=7071.93, stdev=2133.74 00:14:29.300 lat (usec): min=1909, max=14008, avg=7074.94, stdev=2133.89 00:14:29.300 clat percentiles (usec): 00:14:29.300 | 1.00th=[ 3326], 5.00th=[ 4047], 10.00th=[ 4490], 20.00th=[ 5145], 00:14:29.300 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7373], 00:14:29.300 | 70.00th=[ 8094], 80.00th=[ 8848], 90.00th=[10028], 95.00th=[10945], 00:14:29.300 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829], 00:14:29.300 | 99.99th=[13960] 00:14:29.300 bw ( KiB/s): min=73120, max=89344, per=49.98%, avg=80512.00, stdev=7656.27, samples=4 00:14:29.300 iops : min= 4570, max= 5584, avg=5032.00, stdev=478.52, samples=4 00:14:29.300 write: IOPS=5944, BW=92.9MiB/s (97.4MB/s)(165MiB/1777msec); 0 zone resets 00:14:29.300 slat (usec): min=28, max=749, avg=33.14, stdev=13.52 00:14:29.300 clat (usec): min=2515, max=17450, avg=9932.22, stdev=1761.15 00:14:29.300 lat (usec): min=2544, max=17562, avg=9965.36, stdev=1763.35 00:14:29.300 clat percentiles (usec): 00:14:29.300 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8455], 00:14:29.300 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:14:29.300 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12387], 95.00th=[13173], 00:14:29.300 | 99.00th=[14877], 99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:14:29.300 | 99.99th=[17433] 00:14:29.300 bw ( KiB/s): min=77856, max=91520, per=88.34%, avg=84016.00, stdev=6990.69, samples=4 00:14:29.300 iops : min= 4866, max= 5720, avg=5251.00, stdev=436.92, samples=4 00:14:29.300 lat (msec) : 2=0.01%, 4=2.92%, 10=75.60%, 20=21.47% 00:14:29.300 cpu : usr=80.29%, sys=15.22%, ctx=17, majf=0, minf=16 00:14:29.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:29.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.300 issued rwts: total=20185,10563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.300 00:14:29.300 Run status group 0 (all jobs): 00:14:29.300 READ: 
bw=157MiB/s (165MB/s), 157MiB/s-157MiB/s (165MB/s-165MB/s), io=315MiB (331MB), run=2005-2005msec 00:14:29.300 WRITE: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=165MiB (173MB), run=1777-1777msec 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.300 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.300 rmmod nvme_tcp 00:14:29.300 rmmod nvme_fabrics 00:14:29.559 rmmod nvme_keyring 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74773 ']' 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74773 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 74773 ']' 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 74773 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74773 00:14:29.559 killing process with pid 74773 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74773' 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 74773 00:14:29.559 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 74773 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:14:29.817 21:28:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.817 21:28:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:29.817 ************************************ 00:14:29.817 END TEST nvmf_fio_host 00:14:29.817 ************************************ 00:14:29.817 00:14:29.817 real 0m8.632s 00:14:29.817 user 0m34.197s 00:14:29.817 sys 0m2.757s 00:14:29.817 21:28:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.817 21:28:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:29.817 21:28:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:29.817 21:28:03 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:29.817 21:28:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:29.817 21:28:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.817 21:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:29.817 ************************************ 00:14:29.817 START TEST nvmf_failover 00:14:29.817 ************************************ 00:14:29.817 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:30.084 * Looking for test storage... 00:14:30.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:30.084 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:30.085 Cannot find device "nvmf_tgt_br" 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.085 Cannot find device "nvmf_tgt_br2" 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:30.085 Cannot find device "nvmf_tgt_br" 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:30.085 Cannot find device "nvmf_tgt_br2" 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:30.085 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
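For orientation: the "Cannot find device" and "Cannot open network namespace" messages above are just nvmf_veth_init tearing down leftovers from a previous run, with failures tolerated, before it builds a fresh test network; the remaining bring-up, bridging, iptables rule and ping checks follow immediately below in the trace. Condensed into plain commands (same namespace, interface and address names as the trace; an illustrative sketch of what the script does, not a substitute for test/nvmf/common.sh), the setup is roughly:

  # namespace for the target, plus veth pairs whose bridge-side peers stay in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # the initiator keeps 10.0.0.1; the two target addresses live inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and tie the bridge-side peers together with nvmf_br
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP traffic in, allow bridge forwarding, and verify both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1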
00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:30.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:14:30.345 00:14:30.345 --- 10.0.0.2 ping statistics --- 00:14:30.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.345 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:30.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:14:30.345 00:14:30.345 --- 10.0.0.3 ping statistics --- 00:14:30.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.345 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:30.345 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:14:30.606 00:14:30.606 --- 10.0.0.1 ping statistics --- 00:14:30.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.606 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75117 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75117 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75117 ']' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:30.606 21:28:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:30.606 [2024-07-15 21:28:03.816564] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:14:30.606 [2024-07-15 21:28:03.816671] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.606 [2024-07-15 21:28:03.960499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.863 [2024-07-15 21:28:04.060068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.863 [2024-07-15 21:28:04.060139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.864 [2024-07-15 21:28:04.060150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.864 [2024-07-15 21:28:04.060159] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.864 [2024-07-15 21:28:04.060166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
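With the test network in place, the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE, traced above) and then configured through its default RPC socket /var/tmp/spdk.sock by the rpc.py calls that follow in the trace. Keeping the exact arguments from those calls (the $rpc shorthand is only for readability here), the configuration step amounts to roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport with the same options the test passes (-t tcp -o -u 8192)
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 64 MB malloc bdev (MALLOC_BDEV_SIZE) with 512-byte blocks as the namespace backing store
  $rpc bdev_malloc_create 64 512 -b Malloc0

  # one subsystem, one namespace, three TCP listeners so the host has ports to fail over between
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422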
00:14:30.864 [2024-07-15 21:28:04.060325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.864 [2024-07-15 21:28:04.060514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.864 [2024-07-15 21:28:04.060517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.864 [2024-07-15 21:28:04.103948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.429 21:28:04 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.688 [2024-07-15 21:28:04.925094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.688 21:28:04 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:31.946 Malloc0 00:14:31.946 21:28:05 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:32.204 21:28:05 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.463 21:28:05 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.463 [2024-07-15 21:28:05.772911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.463 21:28:05 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:32.722 [2024-07-15 21:28:05.988831] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:32.722 21:28:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:32.979 [2024-07-15 21:28:06.268840] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75169 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75169 /var/tmp/bdevperf.sock 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 75169 ']' 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.979 21:28:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:33.915 21:28:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.915 21:28:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:33.915 21:28:07 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:34.480 NVMe0n1 00:14:34.480 21:28:07 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:34.738 00:14:34.738 21:28:07 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75198 00:14:34.738 21:28:07 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.738 21:28:07 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:35.694 21:28:08 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.953 21:28:09 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:39.233 21:28:12 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:39.233 00:14:39.233 21:28:12 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:39.490 21:28:12 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:42.767 21:28:15 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.767 [2024-07-15 21:28:15.882175] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.767 21:28:15 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:43.704 21:28:16 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:43.962 21:28:17 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75198 00:14:50.533 0 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75169 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75169 ']' 
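The failover exercise just traced is driven entirely from the host side: bdevperf is started with -z against its own RPC socket, NVMe0 is attached to the same subsystem over the 4420 and 4421 listeners, perform_tests launches 15 seconds of 4 KiB verify I/O at queue depth 128, and listeners are then removed and re-added underneath the running job. A condensed sketch of that driver logic follows; the $rpc and $brpc shorthands, the & backgrounding and the bare wait are simplifications, and the real script also waits for the bdevperf RPC socket before issuing the attach calls:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"

  # bdevperf idles until configured over RPC (-z), then runs the verify workload
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

  # attach over 4420 (creates NVMe0n1), then register 4421 as an alternate path for NVMe0
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the I/O, then juggle listeners on the target so the I/O has to move between ports
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait    # let perform_tests finish before tearing everything down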
00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75169 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75169 00:14:50.533 killing process with pid 75169 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75169' 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75169 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75169 00:14:50.533 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:50.533 [2024-07-15 21:28:06.341424] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:14:50.533 [2024-07-15 21:28:06.341528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75169 ] 00:14:50.533 [2024-07-15 21:28:06.475487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.533 [2024-07-15 21:28:06.581141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.533 [2024-07-15 21:28:06.623643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.533 Running I/O for 15 seconds... 
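A note on the bdevperf log replayed below: the burst of "ABORTED - SQ DELETION (00/08)" completions stamped 21:28:09 lines up with the moment the 4420 listener was removed above. Commands still in flight on that connection are completed with the SQ-deletion abort status when its queues are torn down, and the verify workload is expected to carry on over the remaining listener, which is the behaviour this test is checking. To get a quick feel for how many commands were cut off during the path changes, a simple count over the replayed output (or over try.txt itself, before the trap removes it) is enough, for example:

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt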
00:14:50.533 [2024-07-15 21:28:09.177668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.177943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.177969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.177983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178009] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.533 [2024-07-15 21:28:09.178177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.533 [2024-07-15 21:28:09.178394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.533 [2024-07-15 21:28:09.178408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:50.534 [2024-07-15 21:28:09.178847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.178975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.178988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.534 [2024-07-15 21:28:09.179342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179368] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.534 [2024-07-15 21:28:09.179528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.534 [2024-07-15 21:28:09.179542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.179764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:50.535 [2024-07-15 21:28:09.179916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.179992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.535 [2024-07-15 21:28:09.180346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.535 [2024-07-15 21:28:09.180631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.535 [2024-07-15 21:28:09.180643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:09.180669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:09.180695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94720 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:09.180725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.180967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.180981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:50.536 [2024-07-15 21:28:09.180993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.181019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.181044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.181074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.181100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:09.181126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a57c0 is same with the state(5) to be set 00:14:50.536 [2024-07-15 21:28:09.181154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.536 [2024-07-15 21:28:09.181164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.536 [2024-07-15 21:28:09.181173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94144 len:8 PRP1 0x0 PRP2 0x0 00:14:50.536 [2024-07-15 21:28:09.181187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:09.181242] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22a57c0 was disconnected and freed. reset controller. 
00:14:50.536 [2024-07-15 21:28:09.181257] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:14:50.536 [2024-07-15 21:28:09.181311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.536 [2024-07-15 21:28:09.181326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.536 [2024-07-15 21:28:09.181340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.536 [2024-07-15 21:28:09.181352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.536 [2024-07-15 21:28:09.181365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.536 [2024-07-15 21:28:09.181378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.536 [2024-07-15 21:28:09.181390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:50.536 [2024-07-15 21:28:09.181403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:50.536 [2024-07-15 21:28:09.181415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:14:50.536 [2024-07-15 21:28:09.184193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:14:50.536 [2024-07-15 21:28:09.184237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254570 (9): Bad file descriptor
00:14:50.536 [2024-07-15 21:28:09.219637] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:50.536 [2024-07-15 21:28:12.662726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.536 [2024-07-15 21:28:12.662972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.662987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663096] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.536 [2024-07-15 21:28:12.663209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.536 [2024-07-15 21:28:12.663222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.663728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.537 [2024-07-15 21:28:12.663968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.663983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19704 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.537 [2024-07-15 21:28:12.664369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.537 [2024-07-15 21:28:12.664391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 
[2024-07-15 21:28:12.664482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.664967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.664990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.538 [2024-07-15 21:28:12.665495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.665959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.665984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.538 [2024-07-15 21:28:12.666228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.538 [2024-07-15 21:28:12.666254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.666295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 
21:28:12.666508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.666958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.666981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.539 [2024-07-15 21:28:12.667667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19520 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.539 [2024-07-15 21:28:12.667777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.539 [2024-07-15 21:28:12.667792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:12.667804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.667831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:12.667845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.667859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:12.667872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.667929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.540 [2024-07-15 21:28:12.667947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.540 [2024-07-15 21:28:12.667958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19568 len:8 PRP1 0x0 PRP2 0x0 00:14:50.540 [2024-07-15 21:28:12.667971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.668037] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d6d30 was disconnected and freed. reset controller. 
00:14:50.540 [2024-07-15 21:28:12.668055] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:50.540 [2024-07-15 21:28:12.668116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.540 [2024-07-15 21:28:12.668132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.668147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.540 [2024-07-15 21:28:12.668161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.668174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.540 [2024-07-15 21:28:12.668188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.668202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.540 [2024-07-15 21:28:12.668217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:12.668231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:50.540 [2024-07-15 21:28:12.668285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254570 (9): Bad file descriptor 00:14:50.540 [2024-07-15 21:28:12.671328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:50.540 [2024-07-15 21:28:12.704523] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
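The span above is the normal teardown and failover path in bdev_nvme for this test: once the listener the host was using goes away, every command still queued on the deleted submission queue is completed with ABORTED - SQ DELETION (00/08), TCP qpair 0x22d6d30 is disconnected and freed, bdev_nvme_failover_trid moves the controller from 10.0.0.2:4421 to the next registered path at 10.0.0.2:4422, and the controller reset completes successfully. A minimal sketch of how a target and host of this shape can be wired up by hand with SPDK RPCs is shown below; the bdev names (Malloc0, Nvme0), the serial number, and the exact listener layout are illustrative assumptions rather than values taken from this run, and depending on the SPDK version an explicit multipath/failover mode option may also be needed on the host side.

  # Target side: one subsystem, three TCP listeners (sketch; assumed namespace/listener layout)
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # Host side: register every path under the same controller name (Nvme0, assumed here)
  # so bdev_nvme has an alternate trid to fail over to when the active listener disappears
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1
  # Removing the listener the host is currently connected to is what produces the
  # SQ DELETION aborts and the failover/reset sequence logged above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421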
00:14:50.540 [2024-07-15 21:28:17.101996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.540 [2024-07-15 21:28:17.102282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.540 [2024-07-15 21:28:17.102920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.540 [2024-07-15 21:28:17.102935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.102948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.102962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.102975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.102989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24440 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:50.541 [2024-07-15 21:28:17.103472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.541 [2024-07-15 21:28:17.103612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.541 [2024-07-15 21:28:17.103881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.541 [2024-07-15 21:28:17.103895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.103909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.103922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.103942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.103955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.103975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.103988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.542 [2024-07-15 21:28:17.104406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:50.542 [2024-07-15 21:28:17.104621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.542 [2024-07-15 21:28:17.104883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.542 [2024-07-15 21:28:17.104897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.104911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:50.543 [2024-07-15 21:28:17.104924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.104939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.104951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.104966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.104979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.104993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.543 [2024-07-15 21:28:17.105339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d5dd0 is same with the state(5) to be set 00:14:50.543 [2024-07-15 21:28:17.105371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24752 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25192 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105477] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25200 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25208 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25224 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25232 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25240 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:8 PRP1 0x0 PRP2 0x0 00:14:50.543 [2024-07-15 21:28:17.105780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.543 [2024-07-15 21:28:17.105793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.543 [2024-07-15 21:28:17.105802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.543 [2024-07-15 21:28:17.105812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25256 len:8 PRP1 0x0 PRP2 0x0 00:14:50.544 [2024-07-15 21:28:17.105833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.105846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:50.544 [2024-07-15 21:28:17.105856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:50.544 [2024-07-15 21:28:17.105865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25264 len:8 PRP1 0x0 PRP2 0x0 00:14:50.544 [2024-07-15 21:28:17.105878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.105928] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d5dd0 was disconnected and freed. reset controller. 00:14:50.544 [2024-07-15 21:28:17.105944] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:50.544 [2024-07-15 21:28:17.105995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.544 [2024-07-15 21:28:17.106010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.106024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.544 [2024-07-15 21:28:17.106037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.106051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.544 [2024-07-15 21:28:17.106064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.106078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.544 [2024-07-15 21:28:17.106091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.544 [2024-07-15 21:28:17.106104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
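(Not part of the captured test output.) The wall of "ABORTED - SQ DELETION (00/08)" completions above is the expected signature of a path being torn down during failover: every READ/WRITE still queued on qpair 0x22d5dd0 is manually completed as aborted before the controller is marked failed and reset. As a hedged, illustrative aid only, the same bookkeeping the test performs later can be reproduced against a saved copy of this output; the LOG filename here is a hypothetical local copy, not a file produced by this run.
  # Illustrative only. Count aborted I/O completions and successful controller resets
  # in a saved bdevperf log, e.g. a local copy of the try.txt dumped further down.
  LOG=try.txt
  grep -c 'ABORTED - SQ DELETION' "$LOG"
  grep -c 'Resetting controller successful' "$LOG"   # host/failover.sh later expects this count to reach 3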
00:14:50.544 [2024-07-15 21:28:17.106146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2254570 (9): Bad file descriptor 00:14:50.544 [2024-07-15 21:28:17.109080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:50.544 [2024-07-15 21:28:17.145993] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:50.544 00:14:50.544 Latency(us) 00:14:50.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.544 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:50.544 Verification LBA range: start 0x0 length 0x4000 00:14:50.544 NVMe0n1 : 15.01 11101.10 43.36 293.54 0.00 11209.56 486.91 14844.30 00:14:50.544 =================================================================================================================== 00:14:50.544 Total : 11101.10 43.36 293.54 0.00 11209.56 486.91 14844.30 00:14:50.544 Received shutdown signal, test time was about 15.000000 seconds 00:14:50.544 00:14:50.544 Latency(us) 00:14:50.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.544 =================================================================================================================== 00:14:50.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:50.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75373 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75373 /var/tmp/bdevperf.sock 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75373 ']' 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
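(Not part of the captured test output.) Condensed from the xtrace lines just above and just below, a hedged recap of how this phase of host/failover.sh exercises failover: bdevperf is started in wait-for-RPC mode (-z) on its own socket, extra TCP listeners are added on the target subsystem, paths are attached from bdevperf, the active path is detached so the bdev layer fails over, and I/O is driven through the perf plugin. Arguments are copied from the log; SPDK and SOCK are shorthand for the logged paths, and this is an orientation sketch, not a substitute for the script.
  # Sketch reconstructed from the surrounding trace; ports and NQNs are as logged.
  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # bdevperf waits for RPC configuration (-z) on its own socket.
  $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 1 -f &

  # Expose additional TCP listeners on the target subsystem.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Attach two paths from bdevperf, then drop the first so the bdev layer fails over.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Run the verify workload over the remaining path(s).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests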
00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.544 21:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.110 21:28:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:51.110 21:28:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:14:51.110 21:28:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:51.110 [2024-07-15 21:28:24.412857] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:51.110 21:28:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:51.368 [2024-07-15 21:28:24.600862] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:51.368 21:28:24 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.629 NVMe0n1 00:14:51.629 21:28:24 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:51.887 00:14:51.887 21:28:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:52.145 00:14:52.145 21:28:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:52.145 21:28:25 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:52.402 21:28:25 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:52.661 21:28:25 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:55.946 21:28:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:55.946 21:28:28 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:55.946 21:28:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:55.946 21:28:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75450 00:14:55.946 21:28:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 75450 00:14:56.882 0 00:14:56.882 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:56.882 [2024-07-15 21:28:23.371863] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:14:56.882 [2024-07-15 21:28:23.371941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75373 ] 00:14:56.882 [2024-07-15 21:28:23.516943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.882 [2024-07-15 21:28:23.613583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.882 [2024-07-15 21:28:23.655125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:56.882 [2024-07-15 21:28:25.842618] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:56.882 [2024-07-15 21:28:25.842718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.882 [2024-07-15 21:28:25.842738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.882 [2024-07-15 21:28:25.842754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.882 [2024-07-15 21:28:25.842767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.882 [2024-07-15 21:28:25.842780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.882 [2024-07-15 21:28:25.842792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.882 [2024-07-15 21:28:25.842804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.882 [2024-07-15 21:28:25.842816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.882 [2024-07-15 21:28:25.842836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:56.882 [2024-07-15 21:28:25.842880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:56.882 [2024-07-15 21:28:25.842903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab1570 (9): Bad file descriptor 00:14:56.882 [2024-07-15 21:28:25.849399] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:56.882 Running I/O for 1 seconds... 
00:14:56.882 00:14:56.882 Latency(us) 00:14:56.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.882 Verification LBA range: start 0x0 length 0x4000 00:14:56.882 NVMe0n1 : 1.01 10515.79 41.08 0.00 0.00 12105.94 1151.49 13001.92 00:14:56.882 =================================================================================================================== 00:14:56.882 Total : 10515.79 41.08 0.00 0.00 12105.94 1151.49 13001.92 00:14:56.882 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:56.882 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:57.140 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:57.398 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:57.398 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.656 21:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:57.914 21:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 75373 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75373 ']' 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75373 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75373 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.196 killing process with pid 75373 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75373' 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75373 00:15:01.196 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75373 00:15:01.462 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:01.462 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:01.722 21:28:34 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.722 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.722 rmmod nvme_tcp 00:15:01.723 rmmod nvme_fabrics 00:15:01.723 rmmod nvme_keyring 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75117 ']' 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75117 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75117 ']' 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75117 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75117 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75117' 00:15:01.723 killing process with pid 75117 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75117 00:15:01.723 21:28:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75117 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:01.982 00:15:01.982 real 0m32.170s 00:15:01.982 user 2m2.285s 00:15:01.982 sys 0m6.566s 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.982 21:28:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:01.982 ************************************ 00:15:01.982 END TEST nvmf_failover 00:15:01.982 ************************************ 00:15:02.241 21:28:35 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:02.241 21:28:35 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:02.241 21:28:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:02.241 21:28:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.241 21:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.241 ************************************ 00:15:02.241 START TEST nvmf_host_discovery 00:15:02.241 ************************************ 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:02.242 * Looking for test storage... 00:15:02.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:02.242 Cannot find device "nvmf_tgt_br" 00:15:02.242 
21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.242 Cannot find device "nvmf_tgt_br2" 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:02.242 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:02.503 Cannot find device "nvmf_tgt_br" 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:02.503 Cannot find device "nvmf_tgt_br2" 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.503 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:02.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:15:02.763 00:15:02.763 --- 10.0.0.2 ping statistics --- 00:15:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.763 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:02.763 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.763 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:15:02.763 00:15:02.763 --- 10.0.0.3 ping statistics --- 00:15:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.763 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:15:02.763 00:15:02.763 --- 10.0.0.1 ping statistics --- 00:15:02.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.763 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.763 21:28:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75713 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75713 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75713 ']' 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.763 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.764 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.764 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.764 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.764 [2024-07-15 21:28:36.066384] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:02.764 [2024-07-15 21:28:36.066460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.021 [2024-07-15 21:28:36.210058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.021 [2024-07-15 21:28:36.307036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
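(Not part of the captured test output.) For orientation: the "Cannot find device" and "Cannot open network namespace" messages earlier in this block come from nvmf_veth_init tearing down leftovers that do not exist on a fresh runner, and each is followed by "# true", so they are harmless. Below is a hedged, condensed recap of the topology the trace above builds, with commands copied from the log; only the first target interface (10.0.0.2) is shown, and the second interface at 10.0.0.3 follows the same pattern.
  # Condensed recap of nvmf_veth_init as traced above; run as root. Illustrative only.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator-side check that the in-namespace target address answers, as verified above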
00:15:03.022 [2024-07-15 21:28:36.307075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.022 [2024-07-15 21:28:36.307084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.022 [2024-07-15 21:28:36.307092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.022 [2024-07-15 21:28:36.307099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.022 [2024-07-15 21:28:36.307121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.022 [2024-07-15 21:28:36.347931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.585 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.585 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:03.585 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.585 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:03.585 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 [2024-07-15 21:28:36.982302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 [2024-07-15 21:28:36.994388] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.845 21:28:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 null0 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 null1 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75745 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75745 /tmp/host.sock 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 75745 ']' 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.845 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.845 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.845 [2024-07-15 21:28:37.090083] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:03.845 [2024-07-15 21:28:37.090154] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75745 ] 00:15:04.116 [2024-07-15 21:28:37.232187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.116 [2024-07-15 21:28:37.332790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.116 [2024-07-15 21:28:37.374633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.681 21:28:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.681 21:28:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.681 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.940 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.941 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.941 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.941 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.941 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 [2024-07-15 21:28:38.316532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:05.199 
21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:05.199 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:05.200 21:28:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:05.765 [2024-07-15 21:28:38.981776] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:05.765 [2024-07-15 21:28:38.981824] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:05.765 [2024-07-15 21:28:38.981843] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:05.765 [2024-07-15 21:28:38.987798] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:05.765 [2024-07-15 21:28:39.044680] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:05.765 [2024-07-15 21:28:39.044723] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:06.332 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.592 [2024-07-15 21:28:39.855283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:06.592 [2024-07-15 21:28:39.855904] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:06.592 [2024-07-15 21:28:39.855936] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.592 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.593 [2024-07-15 21:28:39.861879] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.593 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.593 [2024-07-15 21:28:39.924004] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:06.593 [2024-07-15 21:28:39.924030] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:06.593 [2024-07-15 21:28:39.924037] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.852 21:28:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.852 21:28:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.852 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.852 [2024-07-15 21:28:40.063911] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:06.852 [2024-07-15 21:28:40.063945] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:06.853 [2024-07-15 21:28:40.069892] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:06.853 [2024-07-15 21:28:40.069928] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:06.853 [2024-07-15 21:28:40.070022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.853 [2024-07-15 21:28:40.070049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.853 [2024-07-15 21:28:40.070061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.853 [2024-07-15 21:28:40.070071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.853 [2024-07-15 21:28:40.070081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.853 [2024-07-15 21:28:40.070090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.853 [2024-07-15 21:28:40.070100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.853 [2024-07-15 21:28:40.070109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.853 [2024-07-15 21:28:40.070119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd5600 is same with the state(5) to be set 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.853 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.112 21:28:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:07.112 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.113 21:28:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 [2024-07-15 21:28:41.478561] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:08.491 [2024-07-15 21:28:41.478606] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:08.491 [2024-07-15 21:28:41.478624] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:08.491 [2024-07-15 21:28:41.484617] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:08.491 [2024-07-15 21:28:41.544942] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:08.491 [2024-07-15 21:28:41.545007] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 request: 00:15:08.491 { 00:15:08.491 "name": "nvme", 00:15:08.491 "trtype": "tcp", 00:15:08.491 "traddr": "10.0.0.2", 00:15:08.491 "adrfam": "ipv4", 00:15:08.491 "trsvcid": 
"8009", 00:15:08.491 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:08.491 "wait_for_attach": true, 00:15:08.491 "method": "bdev_nvme_start_discovery", 00:15:08.491 "req_id": 1 00:15:08.491 } 00:15:08.491 Got JSON-RPC error response 00:15:08.491 response: 00:15:08.491 { 00:15:08.491 "code": -17, 00:15:08.491 "message": "File exists" 00:15:08.491 } 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 request: 00:15:08.491 { 00:15:08.491 "name": "nvme_second", 00:15:08.491 "trtype": "tcp", 00:15:08.491 "traddr": "10.0.0.2", 00:15:08.491 "adrfam": "ipv4", 00:15:08.491 "trsvcid": "8009", 00:15:08.491 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:08.491 "wait_for_attach": true, 00:15:08.491 "method": "bdev_nvme_start_discovery", 00:15:08.491 "req_id": 1 00:15:08.491 } 00:15:08.491 Got JSON-RPC error response 00:15:08.491 response: 00:15:08.491 { 00:15:08.491 "code": -17, 00:15:08.491 "message": "File exists" 00:15:08.491 } 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.491 21:28:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.870 [2024-07-15 21:28:42.807768] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:09.870 [2024-07-15 21:28:42.807849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde0170 with addr=10.0.0.2, port=8010 00:15:09.870 [2024-07-15 21:28:42.807872] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:09.870 [2024-07-15 21:28:42.807882] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:09.870 [2024-07-15 21:28:42.807891] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:10.799 [2024-07-15 21:28:43.806146] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:10.799 [2024-07-15 21:28:43.806221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde0170 with addr=10.0.0.2, port=8010 00:15:10.799 [2024-07-15 21:28:43.806243] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:10.799 [2024-07-15 21:28:43.806253] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:10.799 [2024-07-15 21:28:43.806261] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:11.731 [2024-07-15 21:28:44.804383] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:11.731 request: 00:15:11.731 { 00:15:11.731 "name": "nvme_second", 00:15:11.731 "trtype": "tcp", 00:15:11.731 "traddr": "10.0.0.2", 00:15:11.731 "adrfam": "ipv4", 00:15:11.731 "trsvcid": "8010", 00:15:11.731 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:11.731 "wait_for_attach": false, 00:15:11.731 "attach_timeout_ms": 3000, 00:15:11.731 "method": "bdev_nvme_start_discovery", 00:15:11.731 "req_id": 1 00:15:11.731 } 00:15:11.731 Got JSON-RPC error response 00:15:11.731 response: 00:15:11.731 { 00:15:11.731 "code": -110, 00:15:11.731 "message": "Connection timed out" 00:15:11.731 } 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75745 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.731 21:28:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.731 rmmod nvme_tcp 00:15:11.731 rmmod nvme_fabrics 00:15:11.731 rmmod nvme_keyring 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75713 ']' 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75713 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 75713 ']' 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 75713 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75713 00:15:11.731 killing process with pid 75713 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 75713' 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 75713 00:15:11.731 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 75713 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:11.990 00:15:11.990 real 0m9.911s 00:15:11.990 user 0m18.230s 00:15:11.990 sys 0m2.551s 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.990 21:28:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.990 ************************************ 00:15:11.990 END TEST nvmf_host_discovery 00:15:11.990 ************************************ 00:15:12.249 21:28:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:12.249 21:28:45 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:12.249 21:28:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:12.249 21:28:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.249 21:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:12.249 ************************************ 00:15:12.249 START TEST nvmf_host_multipath_status 00:15:12.249 ************************************ 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:12.249 * Looking for test storage... 
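The nvmf_host_discovery trace above repeatedly drives the SPDK host application over its JSON-RPC socket and polls until the discovery service reaches the expected state. The sketch below is a simplified reconstruction of that pattern inferred from the xtrace output, not the verbatim contents of test/nvmf/host/discovery.sh or test/common/autotest_common.sh: the RPC method names, jq filters and the /tmp/host.sock socket path are taken directly from the trace, while the rpc_py/host_sock variable names and the condensed function bodies are illustrative.

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as printed later in this log
host_sock=/tmp/host.sock                             # the -s socket used by every rpc_cmd call above

get_subsystem_names() {
    # controllers attached on the host side (e.g. "nvme0")
    "$rpc_py" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # bdevs created from the attached namespaces (e.g. "nvme0n1 nvme0n2")
    "$rpc_py" -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # TCP service ports of every path to controller $1 (e.g. "4420 4421")
    "$rpc_py" -s "$host_sock" bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

waitforcondition() {
    # re-evaluate an arbitrary condition up to 10 times, one second apart
    local cond=$1 max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# usage mirroring host/discovery.sh@122 above: wait until both listener ports are visible
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

Polling with a bounded retry count rather than a fixed sleep is what lets the checks tolerate the asynchronous discovery-log-page processing visible in the bdev_nvme.c INFO messages above (new subsystem, new path, attach done).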
00:15:12.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.249 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:12.250 Cannot find device "nvmf_tgt_br" 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:12.250 Cannot find device "nvmf_tgt_br2" 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:12.250 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:12.509 Cannot find device "nvmf_tgt_br" 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:12.509 Cannot find device "nvmf_tgt_br2" 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:12.509 21:28:45 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:12.509 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:12.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:12.768 00:15:12.768 --- 10.0.0.2 ping statistics --- 00:15:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.768 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:12.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:12.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:12.768 00:15:12.768 --- 10.0.0.3 ping statistics --- 00:15:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.768 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:12.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:12.768 00:15:12.768 --- 10.0.0.1 ping statistics --- 00:15:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.768 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76201 00:15:12.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76201 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76201 ']' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:12.768 21:28:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:12.768 [2024-07-15 21:28:46.053554] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
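The nvmf_veth_init sequence traced above reduces to a short series of ip(8) and iptables(8) commands: one network namespace for the target, three veth pairs, a bridge tying the host-side ends together, and a firewall exception for the NVMe/TCP port. The sketch below re-creates that topology using the interface names, addresses, and rules exactly as they appear in the trace; assume it is run as root on a host where none of these devices exist yet (that prerequisite is mine, not the log's).

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace and address everything
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends together and allow NVMe/TCP traffic through
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check, mirroring the pings in the log
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that plumbing in place, 10.0.0.2 inside the namespace is the address the test subsequently exposes as NVMe/TCP listeners on ports 4420 and 4421.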
00:15:12.768 [2024-07-15 21:28:46.053638] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.026 [2024-07-15 21:28:46.198536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:13.026 [2024-07-15 21:28:46.295241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.026 [2024-07-15 21:28:46.295293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.026 [2024-07-15 21:28:46.295303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.026 [2024-07-15 21:28:46.295311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.026 [2024-07-15 21:28:46.295318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.026 [2024-07-15 21:28:46.295545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.026 [2024-07-15 21:28:46.295439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.026 [2024-07-15 21:28:46.336550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:13.593 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.593 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:13.593 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.593 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:13.593 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:13.857 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.857 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76201 00:15:13.857 21:28:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:13.857 [2024-07-15 21:28:47.180471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.857 21:28:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:14.128 Malloc0 00:15:14.128 21:28:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:14.405 21:28:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.664 21:28:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.664 [2024-07-15 21:28:47.960092] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.664 21:28:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:14.923 [2024-07-15 21:28:48.175986] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76251 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76251 /var/tmp/bdevperf.sock 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 76251 ']' 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:14.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:14.923 21:28:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:15.858 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.858 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:15.858 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:16.116 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:16.375 Nvme0n1 00:15:16.375 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:16.633 Nvme0n1 00:15:16.633 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:16.633 21:28:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:18.537 21:28:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:18.537 21:28:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:18.796 21:28:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:19.054 21:28:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:19.991 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:19.991 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:19.991 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.991 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:20.265 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.265 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:20.265 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.265 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:20.522 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:20.522 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:20.522 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.522 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:20.779 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.779 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:20.779 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.779 21:28:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.036 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:21.293 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.293 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:21.293 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:21.573 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:21.832 21:28:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:22.767 21:28:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:22.767 21:28:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:22.767 21:28:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.767 21:28:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.026 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:23.285 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.285 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:23.285 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:23.285 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.544 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.545 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:23.545 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.545 21:28:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:23.804 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.804 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:23.804 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.804 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:24.063 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.063 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:24.063 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:24.063 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:24.322 21:28:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:25.260 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:25.260 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:25.260 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.260 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:25.519 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.519 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:25.519 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.519 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:25.778 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:25.779 
21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:25.779 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:25.779 21:28:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.037 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:26.296 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.296 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:26.296 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.296 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:26.554 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.554 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:26.554 21:28:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:26.812 21:29:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:27.071 21:29:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:28.008 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:28.008 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:28.008 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.008 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:28.267 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.267 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:28.267 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.267 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:28.525 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.525 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:28.525 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:28.525 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.784 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.784 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:28.784 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.784 21:29:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.042 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:29.300 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:29.300 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:29.300 21:29:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:29.557 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:29.815 21:29:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:30.750 21:29:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:30.750 21:29:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:30.750 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.750 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:31.007 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.007 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:31.007 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:31.007 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.265 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:31.523 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.523 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:31.523 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.523 21:29:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:31.780 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.780 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:31.780 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.780 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:32.037 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:32.037 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:32.037 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:32.294 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:32.294 21:29:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:33.667 21:29:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.924 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:34.180 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.181 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:34.181 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.181 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:34.437 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:34.437 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:34.437 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.437 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:34.694 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.694 21:29:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:34.951 21:29:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:34.951 21:29:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:34.951 21:29:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:35.207 21:29:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current true 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.575 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:36.830 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.830 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:36.830 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.830 21:29:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.086 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:37.343 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.343 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:37.343 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.343 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:37.601 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.601 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:37.601 21:29:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:37.857 21:29:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
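Every check_status round in the trace above is built from one primitive: ask bdevperf over its RPC socket for the I/O paths of the multipath controller and compare a single field of the path on a given listener port with the expected value. A minimal stand-alone version of that check, reconstructed from the rpc.py | jq pipeline the log keeps repeating, might look like the following; the function name and argument order mirror the trace, but this is a sketch under those assumptions rather than the script's exact source.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <listener port> <field: current|connected|accessible> <expected: true|false>
    # bdevperf runs on a single core here (-m 0x4), so one path per port is expected.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
                 | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # e.g. right after set_ANA_state optimized optimized, the trace expects:
    port_status 4420 current true
    port_status 4421 current false

A check_status call in the trace is then just six of these checks in a row: current, connected, and accessible for each of the two listener ports.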
00:15:37.857 21:29:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:39.245 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.503 21:29:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.762 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.762 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:39.762 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.762 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:40.019 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.019 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:40.019 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.019 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:40.277 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.277 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:40.277 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.277 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:40.535 21:29:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:41.941 21:29:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:41.941 21:29:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:41.941 21:29:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.941 21:29:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.941 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:42.205 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.205 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.205 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:42.205 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.462 21:29:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.462 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:42.462 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.462 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.719 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.719 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.719 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.719 21:29:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:42.976 21:29:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.976 21:29:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:42.976 21:29:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:42.976 21:29:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:43.235 21:29:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:44.167 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:44.167 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:44.167 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.167 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.424 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.424 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.424 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.424 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.789 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:44.789 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:44.789 21:29:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:44.789 21:29:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.076 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.333 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.333 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:45.333 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.333 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76251 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76251 ']' 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76251 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76251 00:15:45.590 killing process with pid 76251 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76251' 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76251 00:15:45.590 21:29:18 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # wait 76251 00:15:45.849 Connection closed with partial response: 00:15:45.849 00:15:45.849 00:15:45.849 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76251 00:15:45.849 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.849 [2024-07-15 21:28:48.246280] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:45.849 [2024-07-15 21:28:48.246433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 00:15:45.849 [2024-07-15 21:28:48.388973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.849 [2024-07-15 21:28:48.508442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.849 [2024-07-15 21:28:48.549826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:45.849 Running I/O for 90 seconds... 00:15:45.849 [2024-07-15 21:29:02.767859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.767942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.767991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 
[2024-07-15 21:29:02.768158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.849 [2024-07-15 21:29:02.768188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:45.849 [2024-07-15 21:29:02.768206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.849 [2024-07-15 21:29:02.768218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.768448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.768986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.768999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.769031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.769063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.769095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.850 [2024-07-15 21:29:02.769127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
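The per-command dump continues below; just before it began, the trace showed the suite's killprocess helper (common/autotest_common.sh) shutting down bdevperf: kill -0 to confirm the pid is alive, ps --no-headers -o comm= to log the process name, then kill followed by wait. A rough approximation of that pattern, not the suite's exact implementation:

# Approximation of the killprocess pattern seen in the trace above.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1           # is the process still running?
    if [[ "$(uname)" == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_2 for bdevperf
        echo "killing process with pid $pid ($name)"
    fi
    kill "$pid" && wait "$pid"   # wait only reaps pids started by this shell, as bdevperf was here
}
# killprocess 76251   # the bdevperf pid in this run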
00:15:45.850 [2024-07-15 21:29:02.769285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:45.850 [2024-07-15 21:29:02.769656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.850 [2024-07-15 21:29:02.769669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.769942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.769974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.769992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:45.851 [2024-07-15 21:29:02.770260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.851 [2024-07-15 21:29:02.770451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:45.851 [2024-07-15 21:29:02.770966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.851 [2024-07-15 21:29:02.770979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.770999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
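Everything in this block is the bdevperf log replayed from try.txt: each queued READ/WRITE is echoed by nvme_qpair.c, and the completions printed here carry ASYMMETRIC ACCESS INACCESSIBLE (03/02) status while the ANA state of a path is being flipped. A purely illustrative way to summarize such a dump after the fact, not something the test itself runs (the path is the file cat'ed above):

# Hypothetical helper: count INACCESSIBLE completions per queue id in the dump.
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt | sort | uniq -c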
00:15:45.852 [2024-07-15 21:29:02.771216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.852 [2024-07-15 21:29:02.771228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:45.852 [2024-07-15 21:29:02.771525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.852 [2024-07-15 21:29:02.771538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:15:45.852 [2024-07-15 21:29:02.771558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:45.852 [2024-07-15 21:29:02.771571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... dozens of further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status on qid:1 omitted: the 21:29:02 burst covers lba 98144-98736 and the 21:29:16 burst covers lba 84480-85536 ...]
00:15:45.854 [2024-07-15 21:29:16.498391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:45.854 [2024-07-15 21:29:16.498405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:15:45.854 Received shutdown signal, test time was about 29.007130 seconds
00:15:45.854
00:15:45.854                                                 Latency(us)
00:15:45.854 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:45.854 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:45.854 Verification LBA range: start 0x0 length 0x4000
00:15:45.854      Nvme0n1             :      29.01   10938.02      42.73       0.00       0.00   11679.94     162.03 3018551.31
00:15:45.854 ===================================================================================================================
00:15:45.854 Total                       :              10938.02      42.73       0.00       0.00   11679.94     162.03 3018551.31
00:15:45.854 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
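The MiB/s column in the summary above is consistent with the job's 4096-byte IO size; a quick back-of-the-envelope check (purely illustrative, not part of the test output):

    # 10938.02 IOPS x 4096 bytes per IO, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 10938.02 * 4096 / (1024 * 1024) }'
    # prints 42.73 MiB/s, matching the MiB/s value reported for Nvme0n1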
21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.112 rmmod nvme_tcp 00:15:46.112 rmmod nvme_fabrics 00:15:46.112 rmmod nvme_keyring 00:15:46.112 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76201 ']' 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76201 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 76201 ']' 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 76201 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76201 00:15:46.369 killing process with pid 76201 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76201' 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 76201 00:15:46.369 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 76201 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.626 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:46.626 ************************************ 00:15:46.626 END TEST nvmf_host_multipath_status 00:15:46.627 ************************************ 00:15:46.627 00:15:46.627 real 0m34.427s 00:15:46.627 user 1m46.278s 00:15:46.627 sys 0m12.781s 00:15:46.627 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.627 21:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.627 21:29:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.627 21:29:19 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.627 21:29:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.627 21:29:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.627 21:29:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.627 ************************************ 00:15:46.627 START TEST nvmf_discovery_remove_ifc 00:15:46.627 ************************************ 00:15:46.627 21:29:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.883 * Looking for test storage... 00:15:46.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.883 21:29:20 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:46.883 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:46.884 Cannot find device "nvmf_tgt_br" 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.884 Cannot find device "nvmf_tgt_br2" 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:46.884 Cannot find device "nvmf_tgt_br" 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:46.884 Cannot find device "nvmf_tgt_br2" 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:46.884 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.141 
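The nvmf_veth_init steps traced above and below build a small veth/bridge topology between the initiator and the nvmf_tgt_ns_spdk namespace. Condensed into a standalone sketch for readability (same commands as in the trace, with the link-up steps omitted; the authoritative version is the helper in nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # the three *_br peers hang off one bridge
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks to 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 that follow then confirm the path before the target is started.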
21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:47.141 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:47.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:47.142 00:15:47.142 --- 10.0.0.2 ping statistics --- 00:15:47.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.142 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:47.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:47.142 00:15:47.142 --- 10.0.0.3 ping statistics --- 00:15:47.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.142 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:47.142 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:15:47.142 00:15:47.142 --- 10.0.0.1 ping statistics --- 00:15:47.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.142 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76990 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76990 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 76990 ']' 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.400 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.401 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.401 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.401 21:29:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.401 [2024-07-15 21:29:20.598364] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
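waitforlisten above blocks until the freshly started nvmf_tgt (pid 76990) answers on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming a simple RPC poll (the real helper in common/autotest_common.sh has more bookkeeping):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1     # give up if the target died
            # consider the app ready once its RPC socket answers a trivial call
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }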
00:15:47.401 [2024-07-15 21:29:20.599038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.401 [2024-07-15 21:29:20.743541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.659 [2024-07-15 21:29:20.843088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.659 [2024-07-15 21:29:20.843270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.659 [2024-07-15 21:29:20.843444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.659 [2024-07-15 21:29:20.843639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.659 [2024-07-15 21:29:20.843669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.659 [2024-07-15 21:29:20.843720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.659 [2024-07-15 21:29:20.885870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.226 [2024-07-15 21:29:21.525345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.226 [2024-07-15 21:29:21.533451] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:48.226 null0 00:15:48.226 [2024-07-15 21:29:21.565365] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.226 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
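The "TCP Transport Init" message, the two listener notices (ports 8009 and 4420) and the null0 namespace above come out of the rpc_cmd block at discovery_remove_ifc.sh@43, which the xtrace does not expand. One plausible equivalent sequence, shown only as a sketch (the script's exact commands, bdev sizes and options are not visible in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp
    $rpc bdev_null_create null0 1000 512                      # size/block size are assumptions
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009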
00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77022 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77022 /tmp/host.sock 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77022 ']' 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.226 21:29:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.501 [2024-07-15 21:29:21.637076] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:15:48.501 [2024-07-15 21:29:21.637522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77022 ] 00:15:48.501 [2024-07-15 21:29:21.775480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.760 [2024-07-15 21:29:21.876247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.328 [2024-07-15 21:29:22.593215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 
--reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.328 21:29:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.704 [2024-07-15 21:29:23.634918] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:50.704 [2024-07-15 21:29:23.634956] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:50.704 [2024-07-15 21:29:23.634968] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:50.704 [2024-07-15 21:29:23.640946] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:50.704 [2024-07-15 21:29:23.697797] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:50.704 [2024-07-15 21:29:23.697879] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:50.704 [2024-07-15 21:29:23.697904] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:50.704 [2024-07-15 21:29:23.697922] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:50.704 [2024-07-15 21:29:23.697963] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.704 [2024-07-15 21:29:23.703420] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xffade0 was disconnected and freed. delete nvme_qpair. 
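get_bdev_list, invoked above by wait_for_bdev and expanded in the trace that follows, boils down to listing bdev names over the host app's RPC socket. Reconstructed roughly from the trace:

    get_bdev_list() {
        # query the host-side app at /tmp/host.sock and return a sorted, space-separated name list
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

wait_for_bdev then compares the returned list against the expected value (nvme0n1 here, or the empty string after the target interface is taken down) and sleeps between retries.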
00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.704 21:29:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.639 21:29:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.573 21:29:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:53.944 21:29:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.878 21:29:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.878 21:29:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.878 21:29:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:54.878 21:29:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:55.811 21:29:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.811 [2024-07-15 21:29:29.127278] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:55.811 [2024-07-15 21:29:29.127369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.811 [2024-07-15 21:29:29.127384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.811 [2024-07-15 21:29:29.127398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.811 [2024-07-15 21:29:29.127408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.811 [2024-07-15 21:29:29.127418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.811 [2024-07-15 21:29:29.127428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.811 [2024-07-15 21:29:29.127438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.811 [2024-07-15 21:29:29.127447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.811 [2024-07-15 21:29:29.127457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.811 [2024-07-15 21:29:29.127466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.811 [2024-07-15 21:29:29.127475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60ac0 is same with the state(5) to be set 00:15:55.811 [2024-07-15 21:29:29.137254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf60ac0 (9): Bad file descriptor 00:15:55.811 [2024-07-15 21:29:29.147261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:56.748 21:29:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:56.748 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.008 [2024-07-15 21:29:30.205904] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:57.008 [2024-07-15 21:29:30.206065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf60ac0 with addr=10.0.0.2, port=4420 00:15:57.008 [2024-07-15 21:29:30.206113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf60ac0 is same with the state(5) to be set 00:15:57.008 [2024-07-15 21:29:30.206201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf60ac0 (9): Bad file descriptor 00:15:57.008 [2024-07-15 21:29:30.207248] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:57.008 [2024-07-15 21:29:30.207307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:57.008 [2024-07-15 21:29:30.207336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:57.008 [2024-07-15 21:29:30.207367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:57.008 [2024-07-15 21:29:30.207438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:57.008 [2024-07-15 21:29:30.207468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:57.008 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.008 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:57.008 21:29:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:57.945 [2024-07-15 21:29:31.205925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:57.945 [2024-07-15 21:29:31.205978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:57.945 [2024-07-15 21:29:31.205990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:57.945 [2024-07-15 21:29:31.206000] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:57.945 [2024-07-15 21:29:31.206020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
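The repeated bdev_get_bdevs / jq / sort / xargs / sleep 1 passes before and after this point come from the test's get_bdev_list and wait_for_bdev helpers in host/discovery_remove_ifc.sh. In reduced form (helper names taken from the trace above, bodies simplified here as a sketch), the loop amounts to:

    # Reduced sketch of the polling helpers used throughout this test.
    get_bdev_list() {
        # Flatten the bdev names reported over the host RPC socket into one line.
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value:
        # "" while waiting for nvme0n1 to be deleted, nvme1n1 after re-attach.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }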
00:15:57.945 [2024-07-15 21:29:31.206044] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:57.945 [2024-07-15 21:29:31.206091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.946 [2024-07-15 21:29:31.206104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.946 [2024-07-15 21:29:31.206117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.946 [2024-07-15 21:29:31.206126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.946 [2024-07-15 21:29:31.206136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.946 [2024-07-15 21:29:31.206145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.946 [2024-07-15 21:29:31.206155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.946 [2024-07-15 21:29:31.206164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.946 [2024-07-15 21:29:31.206174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.946 [2024-07-15 21:29:31.206183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.946 [2024-07-15 21:29:31.206192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:15:57.946 [2024-07-15 21:29:31.206835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf64860 (9): Bad file descriptor 00:15:57.946 [2024-07-15 21:29:31.207842] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:57.946 [2024-07-15 21:29:31.207859] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.946 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:58.205 21:29:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:59.141 21:29:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:00.075 [2024-07-15 21:29:33.213568] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:00.075 [2024-07-15 21:29:33.213612] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:00.075 [2024-07-15 21:29:33.213629] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:00.075 [2024-07-15 21:29:33.219593] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:00.075 [2024-07-15 21:29:33.275648] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:00.075 [2024-07-15 21:29:33.275943] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:00.075 [2024-07-15 21:29:33.276001] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:00.075 [2024-07-15 21:29:33.276097] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:00.075 [2024-07-15 21:29:33.276150] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:00.075 [2024-07-15 21:29:33.282308] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1007d90 was disconnected and freed. delete nvme_qpair. 
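With the second discovery attach completed and qpair 0x1007d90 freed, the interface-removal scenario has run a full cycle. Stripped of the RPC plumbing, the sequence the test drove (commands as logged above, reusing the wait_for_bdev helper sketched earlier) was roughly:

    # Condensed outline of the scenario exercised by discovery_remove_ifc.sh.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''          # nvme0n1 is deleted once the controller is declared lost
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1     # discovery re-attaches and registers a fresh bdev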
00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77022 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77022 ']' 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77022 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77022 00:16:00.333 killing process with pid 77022 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77022' 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77022 00:16:00.333 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77022 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.592 rmmod nvme_tcp 00:16:00.592 rmmod nvme_fabrics 00:16:00.592 rmmod nvme_keyring 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:00.592 21:29:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76990 ']' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76990 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 76990 ']' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 76990 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76990 00:16:00.592 killing process with pid 76990 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76990' 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 76990 00:16:00.592 21:29:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 76990 00:16:00.849 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:00.850 00:16:00.850 real 0m14.253s 00:16:00.850 user 0m23.669s 00:16:00.850 sys 0m3.260s 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.850 ************************************ 00:16:00.850 END TEST nvmf_discovery_remove_ifc 00:16:00.850 21:29:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:00.850 ************************************ 00:16:00.850 21:29:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.850 21:29:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:00.850 21:29:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.850 21:29:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.850 21:29:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.850 ************************************ 00:16:00.850 START TEST nvmf_identify_kernel_target 00:16:00.850 ************************************ 00:16:00.850 21:29:34 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:01.108 * Looking for test storage... 00:16:01.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:01.108 Cannot find device "nvmf_tgt_br" 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.108 Cannot find device "nvmf_tgt_br2" 00:16:01.108 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:01.109 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:01.109 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:01.109 Cannot find device "nvmf_tgt_br" 00:16:01.109 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:01.109 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:01.109 Cannot find device "nvmf_tgt_br2" 00:16:01.366 21:29:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.366 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.367 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:01.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:16:01.625 00:16:01.625 --- 10.0.0.2 ping statistics --- 00:16:01.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.625 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:01.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:16:01.625 00:16:01.625 --- 10.0.0.3 ping statistics --- 00:16:01.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.625 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:16:01.625 00:16:01.625 --- 10.0.0.1 ping statistics --- 00:16:01.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.625 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:01.625 21:29:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:01.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.141 Waiting for block devices as requested 00:16:02.141 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.141 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:02.398 No valid GPT data, bailing 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:02.398 No valid GPT data, bailing 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.398 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:02.399 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:02.676 No valid GPT data, bailing 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:02.676 No valid GPT data, bailing 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -a 10.0.0.1 -t tcp -s 4420 00:16:02.676 00:16:02.676 Discovery Log Number of Records 2, Generation counter 2 00:16:02.676 =====Discovery Log Entry 0====== 00:16:02.676 trtype: tcp 00:16:02.676 adrfam: ipv4 00:16:02.676 subtype: current discovery subsystem 00:16:02.676 treq: not specified, sq flow control disable supported 00:16:02.676 portid: 1 00:16:02.676 trsvcid: 4420 00:16:02.676 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:02.676 traddr: 10.0.0.1 00:16:02.676 eflags: none 00:16:02.676 sectype: none 00:16:02.676 =====Discovery Log Entry 1====== 00:16:02.676 trtype: tcp 00:16:02.676 adrfam: ipv4 00:16:02.676 subtype: nvme subsystem 00:16:02.676 treq: not specified, sq flow control disable supported 00:16:02.676 portid: 1 00:16:02.676 trsvcid: 4420 00:16:02.676 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:02.676 traddr: 10.0.0.1 00:16:02.676 eflags: none 00:16:02.676 sectype: none 00:16:02.676 21:29:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:02.676 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:02.935 ===================================================== 00:16:02.935 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:02.935 ===================================================== 00:16:02.935 Controller Capabilities/Features 00:16:02.935 ================================ 00:16:02.935 Vendor ID: 0000 00:16:02.935 Subsystem Vendor ID: 0000 00:16:02.935 Serial Number: f5b5c6296a117b7137c7 00:16:02.935 Model Number: Linux 00:16:02.935 Firmware Version: 6.7.0-68 00:16:02.935 Recommended Arb Burst: 0 00:16:02.935 IEEE OUI Identifier: 00 00 00 00:16:02.935 Multi-path I/O 00:16:02.935 May have multiple subsystem ports: No 00:16:02.935 May have multiple controllers: No 00:16:02.935 Associated with SR-IOV VF: No 00:16:02.935 Max Data Transfer Size: Unlimited 00:16:02.935 Max Number of Namespaces: 0 
00:16:02.935 Max Number of I/O Queues: 1024 00:16:02.935 NVMe Specification Version (VS): 1.3 00:16:02.935 NVMe Specification Version (Identify): 1.3 00:16:02.935 Maximum Queue Entries: 1024 00:16:02.935 Contiguous Queues Required: No 00:16:02.935 Arbitration Mechanisms Supported 00:16:02.935 Weighted Round Robin: Not Supported 00:16:02.935 Vendor Specific: Not Supported 00:16:02.935 Reset Timeout: 7500 ms 00:16:02.935 Doorbell Stride: 4 bytes 00:16:02.935 NVM Subsystem Reset: Not Supported 00:16:02.935 Command Sets Supported 00:16:02.935 NVM Command Set: Supported 00:16:02.935 Boot Partition: Not Supported 00:16:02.935 Memory Page Size Minimum: 4096 bytes 00:16:02.935 Memory Page Size Maximum: 4096 bytes 00:16:02.935 Persistent Memory Region: Not Supported 00:16:02.935 Optional Asynchronous Events Supported 00:16:02.935 Namespace Attribute Notices: Not Supported 00:16:02.935 Firmware Activation Notices: Not Supported 00:16:02.935 ANA Change Notices: Not Supported 00:16:02.935 PLE Aggregate Log Change Notices: Not Supported 00:16:02.935 LBA Status Info Alert Notices: Not Supported 00:16:02.935 EGE Aggregate Log Change Notices: Not Supported 00:16:02.935 Normal NVM Subsystem Shutdown event: Not Supported 00:16:02.935 Zone Descriptor Change Notices: Not Supported 00:16:02.935 Discovery Log Change Notices: Supported 00:16:02.935 Controller Attributes 00:16:02.935 128-bit Host Identifier: Not Supported 00:16:02.935 Non-Operational Permissive Mode: Not Supported 00:16:02.935 NVM Sets: Not Supported 00:16:02.935 Read Recovery Levels: Not Supported 00:16:02.935 Endurance Groups: Not Supported 00:16:02.935 Predictable Latency Mode: Not Supported 00:16:02.935 Traffic Based Keep ALive: Not Supported 00:16:02.935 Namespace Granularity: Not Supported 00:16:02.935 SQ Associations: Not Supported 00:16:02.935 UUID List: Not Supported 00:16:02.935 Multi-Domain Subsystem: Not Supported 00:16:02.935 Fixed Capacity Management: Not Supported 00:16:02.935 Variable Capacity Management: Not Supported 00:16:02.935 Delete Endurance Group: Not Supported 00:16:02.935 Delete NVM Set: Not Supported 00:16:02.935 Extended LBA Formats Supported: Not Supported 00:16:02.935 Flexible Data Placement Supported: Not Supported 00:16:02.935 00:16:02.935 Controller Memory Buffer Support 00:16:02.935 ================================ 00:16:02.935 Supported: No 00:16:02.935 00:16:02.935 Persistent Memory Region Support 00:16:02.935 ================================ 00:16:02.935 Supported: No 00:16:02.935 00:16:02.935 Admin Command Set Attributes 00:16:02.935 ============================ 00:16:02.935 Security Send/Receive: Not Supported 00:16:02.935 Format NVM: Not Supported 00:16:02.935 Firmware Activate/Download: Not Supported 00:16:02.935 Namespace Management: Not Supported 00:16:02.935 Device Self-Test: Not Supported 00:16:02.935 Directives: Not Supported 00:16:02.935 NVMe-MI: Not Supported 00:16:02.935 Virtualization Management: Not Supported 00:16:02.935 Doorbell Buffer Config: Not Supported 00:16:02.935 Get LBA Status Capability: Not Supported 00:16:02.935 Command & Feature Lockdown Capability: Not Supported 00:16:02.935 Abort Command Limit: 1 00:16:02.935 Async Event Request Limit: 1 00:16:02.935 Number of Firmware Slots: N/A 00:16:02.935 Firmware Slot 1 Read-Only: N/A 00:16:02.935 Firmware Activation Without Reset: N/A 00:16:02.935 Multiple Update Detection Support: N/A 00:16:02.935 Firmware Update Granularity: No Information Provided 00:16:02.935 Per-Namespace SMART Log: No 00:16:02.935 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:02.935 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:02.935 Command Effects Log Page: Not Supported 00:16:02.935 Get Log Page Extended Data: Supported 00:16:02.935 Telemetry Log Pages: Not Supported 00:16:02.935 Persistent Event Log Pages: Not Supported 00:16:02.935 Supported Log Pages Log Page: May Support 00:16:02.935 Commands Supported & Effects Log Page: Not Supported 00:16:02.935 Feature Identifiers & Effects Log Page:May Support 00:16:02.935 NVMe-MI Commands & Effects Log Page: May Support 00:16:02.935 Data Area 4 for Telemetry Log: Not Supported 00:16:02.935 Error Log Page Entries Supported: 1 00:16:02.935 Keep Alive: Not Supported 00:16:02.935 00:16:02.935 NVM Command Set Attributes 00:16:02.935 ========================== 00:16:02.935 Submission Queue Entry Size 00:16:02.935 Max: 1 00:16:02.935 Min: 1 00:16:02.935 Completion Queue Entry Size 00:16:02.935 Max: 1 00:16:02.935 Min: 1 00:16:02.935 Number of Namespaces: 0 00:16:02.935 Compare Command: Not Supported 00:16:02.935 Write Uncorrectable Command: Not Supported 00:16:02.935 Dataset Management Command: Not Supported 00:16:02.935 Write Zeroes Command: Not Supported 00:16:02.935 Set Features Save Field: Not Supported 00:16:02.935 Reservations: Not Supported 00:16:02.935 Timestamp: Not Supported 00:16:02.935 Copy: Not Supported 00:16:02.935 Volatile Write Cache: Not Present 00:16:02.935 Atomic Write Unit (Normal): 1 00:16:02.935 Atomic Write Unit (PFail): 1 00:16:02.935 Atomic Compare & Write Unit: 1 00:16:02.935 Fused Compare & Write: Not Supported 00:16:02.935 Scatter-Gather List 00:16:02.935 SGL Command Set: Supported 00:16:02.935 SGL Keyed: Not Supported 00:16:02.935 SGL Bit Bucket Descriptor: Not Supported 00:16:02.935 SGL Metadata Pointer: Not Supported 00:16:02.935 Oversized SGL: Not Supported 00:16:02.935 SGL Metadata Address: Not Supported 00:16:02.935 SGL Offset: Supported 00:16:02.935 Transport SGL Data Block: Not Supported 00:16:02.935 Replay Protected Memory Block: Not Supported 00:16:02.935 00:16:02.935 Firmware Slot Information 00:16:02.935 ========================= 00:16:02.935 Active slot: 0 00:16:02.935 00:16:02.935 00:16:02.935 Error Log 00:16:02.935 ========= 00:16:02.935 00:16:02.935 Active Namespaces 00:16:02.935 ================= 00:16:02.935 Discovery Log Page 00:16:02.935 ================== 00:16:02.935 Generation Counter: 2 00:16:02.935 Number of Records: 2 00:16:02.935 Record Format: 0 00:16:02.935 00:16:02.935 Discovery Log Entry 0 00:16:02.935 ---------------------- 00:16:02.935 Transport Type: 3 (TCP) 00:16:02.935 Address Family: 1 (IPv4) 00:16:02.935 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:02.935 Entry Flags: 00:16:02.935 Duplicate Returned Information: 0 00:16:02.935 Explicit Persistent Connection Support for Discovery: 0 00:16:02.935 Transport Requirements: 00:16:02.935 Secure Channel: Not Specified 00:16:02.935 Port ID: 1 (0x0001) 00:16:02.935 Controller ID: 65535 (0xffff) 00:16:02.935 Admin Max SQ Size: 32 00:16:02.935 Transport Service Identifier: 4420 00:16:02.935 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:02.935 Transport Address: 10.0.0.1 00:16:02.935 Discovery Log Entry 1 00:16:02.935 ---------------------- 00:16:02.935 Transport Type: 3 (TCP) 00:16:02.935 Address Family: 1 (IPv4) 00:16:02.935 Subsystem Type: 2 (NVM Subsystem) 00:16:02.935 Entry Flags: 00:16:02.935 Duplicate Returned Information: 0 00:16:02.935 Explicit Persistent Connection Support for Discovery: 0 00:16:02.935 Transport Requirements: 00:16:02.935 
Secure Channel: Not Specified 00:16:02.935 Port ID: 1 (0x0001) 00:16:02.935 Controller ID: 65535 (0xffff) 00:16:02.935 Admin Max SQ Size: 32 00:16:02.935 Transport Service Identifier: 4420 00:16:02.936 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:02.936 Transport Address: 10.0.0.1 00:16:02.936 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:03.194 get_feature(0x01) failed 00:16:03.194 get_feature(0x02) failed 00:16:03.194 get_feature(0x04) failed 00:16:03.194 ===================================================== 00:16:03.194 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:03.194 ===================================================== 00:16:03.194 Controller Capabilities/Features 00:16:03.194 ================================ 00:16:03.194 Vendor ID: 0000 00:16:03.194 Subsystem Vendor ID: 0000 00:16:03.194 Serial Number: 9b90586a54cf29735e86 00:16:03.194 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:03.194 Firmware Version: 6.7.0-68 00:16:03.194 Recommended Arb Burst: 6 00:16:03.194 IEEE OUI Identifier: 00 00 00 00:16:03.194 Multi-path I/O 00:16:03.194 May have multiple subsystem ports: Yes 00:16:03.194 May have multiple controllers: Yes 00:16:03.194 Associated with SR-IOV VF: No 00:16:03.194 Max Data Transfer Size: Unlimited 00:16:03.194 Max Number of Namespaces: 1024 00:16:03.194 Max Number of I/O Queues: 128 00:16:03.194 NVMe Specification Version (VS): 1.3 00:16:03.194 NVMe Specification Version (Identify): 1.3 00:16:03.194 Maximum Queue Entries: 1024 00:16:03.194 Contiguous Queues Required: No 00:16:03.194 Arbitration Mechanisms Supported 00:16:03.194 Weighted Round Robin: Not Supported 00:16:03.194 Vendor Specific: Not Supported 00:16:03.194 Reset Timeout: 7500 ms 00:16:03.194 Doorbell Stride: 4 bytes 00:16:03.194 NVM Subsystem Reset: Not Supported 00:16:03.194 Command Sets Supported 00:16:03.194 NVM Command Set: Supported 00:16:03.194 Boot Partition: Not Supported 00:16:03.194 Memory Page Size Minimum: 4096 bytes 00:16:03.194 Memory Page Size Maximum: 4096 bytes 00:16:03.194 Persistent Memory Region: Not Supported 00:16:03.194 Optional Asynchronous Events Supported 00:16:03.194 Namespace Attribute Notices: Supported 00:16:03.194 Firmware Activation Notices: Not Supported 00:16:03.194 ANA Change Notices: Supported 00:16:03.194 PLE Aggregate Log Change Notices: Not Supported 00:16:03.194 LBA Status Info Alert Notices: Not Supported 00:16:03.194 EGE Aggregate Log Change Notices: Not Supported 00:16:03.194 Normal NVM Subsystem Shutdown event: Not Supported 00:16:03.194 Zone Descriptor Change Notices: Not Supported 00:16:03.194 Discovery Log Change Notices: Not Supported 00:16:03.194 Controller Attributes 00:16:03.194 128-bit Host Identifier: Supported 00:16:03.194 Non-Operational Permissive Mode: Not Supported 00:16:03.194 NVM Sets: Not Supported 00:16:03.194 Read Recovery Levels: Not Supported 00:16:03.194 Endurance Groups: Not Supported 00:16:03.194 Predictable Latency Mode: Not Supported 00:16:03.194 Traffic Based Keep ALive: Supported 00:16:03.194 Namespace Granularity: Not Supported 00:16:03.194 SQ Associations: Not Supported 00:16:03.194 UUID List: Not Supported 00:16:03.194 Multi-Domain Subsystem: Not Supported 00:16:03.194 Fixed Capacity Management: Not Supported 00:16:03.194 Variable Capacity Management: Not Supported 00:16:03.194 
Delete Endurance Group: Not Supported 00:16:03.194 Delete NVM Set: Not Supported 00:16:03.194 Extended LBA Formats Supported: Not Supported 00:16:03.194 Flexible Data Placement Supported: Not Supported 00:16:03.194 00:16:03.194 Controller Memory Buffer Support 00:16:03.194 ================================ 00:16:03.194 Supported: No 00:16:03.194 00:16:03.194 Persistent Memory Region Support 00:16:03.194 ================================ 00:16:03.194 Supported: No 00:16:03.194 00:16:03.194 Admin Command Set Attributes 00:16:03.194 ============================ 00:16:03.194 Security Send/Receive: Not Supported 00:16:03.194 Format NVM: Not Supported 00:16:03.194 Firmware Activate/Download: Not Supported 00:16:03.194 Namespace Management: Not Supported 00:16:03.194 Device Self-Test: Not Supported 00:16:03.194 Directives: Not Supported 00:16:03.194 NVMe-MI: Not Supported 00:16:03.194 Virtualization Management: Not Supported 00:16:03.194 Doorbell Buffer Config: Not Supported 00:16:03.194 Get LBA Status Capability: Not Supported 00:16:03.194 Command & Feature Lockdown Capability: Not Supported 00:16:03.194 Abort Command Limit: 4 00:16:03.194 Async Event Request Limit: 4 00:16:03.194 Number of Firmware Slots: N/A 00:16:03.194 Firmware Slot 1 Read-Only: N/A 00:16:03.194 Firmware Activation Without Reset: N/A 00:16:03.194 Multiple Update Detection Support: N/A 00:16:03.194 Firmware Update Granularity: No Information Provided 00:16:03.194 Per-Namespace SMART Log: Yes 00:16:03.194 Asymmetric Namespace Access Log Page: Supported 00:16:03.194 ANA Transition Time : 10 sec 00:16:03.194 00:16:03.194 Asymmetric Namespace Access Capabilities 00:16:03.194 ANA Optimized State : Supported 00:16:03.194 ANA Non-Optimized State : Supported 00:16:03.194 ANA Inaccessible State : Supported 00:16:03.194 ANA Persistent Loss State : Supported 00:16:03.194 ANA Change State : Supported 00:16:03.194 ANAGRPID is not changed : No 00:16:03.194 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:03.194 00:16:03.194 ANA Group Identifier Maximum : 128 00:16:03.194 Number of ANA Group Identifiers : 128 00:16:03.194 Max Number of Allowed Namespaces : 1024 00:16:03.194 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:03.194 Command Effects Log Page: Supported 00:16:03.194 Get Log Page Extended Data: Supported 00:16:03.194 Telemetry Log Pages: Not Supported 00:16:03.194 Persistent Event Log Pages: Not Supported 00:16:03.194 Supported Log Pages Log Page: May Support 00:16:03.194 Commands Supported & Effects Log Page: Not Supported 00:16:03.194 Feature Identifiers & Effects Log Page:May Support 00:16:03.194 NVMe-MI Commands & Effects Log Page: May Support 00:16:03.194 Data Area 4 for Telemetry Log: Not Supported 00:16:03.194 Error Log Page Entries Supported: 128 00:16:03.194 Keep Alive: Supported 00:16:03.194 Keep Alive Granularity: 1000 ms 00:16:03.194 00:16:03.194 NVM Command Set Attributes 00:16:03.194 ========================== 00:16:03.194 Submission Queue Entry Size 00:16:03.194 Max: 64 00:16:03.194 Min: 64 00:16:03.194 Completion Queue Entry Size 00:16:03.194 Max: 16 00:16:03.194 Min: 16 00:16:03.194 Number of Namespaces: 1024 00:16:03.194 Compare Command: Not Supported 00:16:03.194 Write Uncorrectable Command: Not Supported 00:16:03.194 Dataset Management Command: Supported 00:16:03.194 Write Zeroes Command: Supported 00:16:03.194 Set Features Save Field: Not Supported 00:16:03.194 Reservations: Not Supported 00:16:03.194 Timestamp: Not Supported 00:16:03.194 Copy: Not Supported 00:16:03.194 Volatile Write Cache: Present 
00:16:03.194 Atomic Write Unit (Normal): 1 00:16:03.194 Atomic Write Unit (PFail): 1 00:16:03.194 Atomic Compare & Write Unit: 1 00:16:03.194 Fused Compare & Write: Not Supported 00:16:03.194 Scatter-Gather List 00:16:03.194 SGL Command Set: Supported 00:16:03.194 SGL Keyed: Not Supported 00:16:03.194 SGL Bit Bucket Descriptor: Not Supported 00:16:03.194 SGL Metadata Pointer: Not Supported 00:16:03.194 Oversized SGL: Not Supported 00:16:03.194 SGL Metadata Address: Not Supported 00:16:03.194 SGL Offset: Supported 00:16:03.194 Transport SGL Data Block: Not Supported 00:16:03.194 Replay Protected Memory Block: Not Supported 00:16:03.194 00:16:03.194 Firmware Slot Information 00:16:03.194 ========================= 00:16:03.194 Active slot: 0 00:16:03.194 00:16:03.194 Asymmetric Namespace Access 00:16:03.194 =========================== 00:16:03.194 Change Count : 0 00:16:03.194 Number of ANA Group Descriptors : 1 00:16:03.194 ANA Group Descriptor : 0 00:16:03.194 ANA Group ID : 1 00:16:03.194 Number of NSID Values : 1 00:16:03.194 Change Count : 0 00:16:03.194 ANA State : 1 00:16:03.194 Namespace Identifier : 1 00:16:03.194 00:16:03.194 Commands Supported and Effects 00:16:03.194 ============================== 00:16:03.194 Admin Commands 00:16:03.194 -------------- 00:16:03.194 Get Log Page (02h): Supported 00:16:03.194 Identify (06h): Supported 00:16:03.194 Abort (08h): Supported 00:16:03.194 Set Features (09h): Supported 00:16:03.194 Get Features (0Ah): Supported 00:16:03.194 Asynchronous Event Request (0Ch): Supported 00:16:03.194 Keep Alive (18h): Supported 00:16:03.194 I/O Commands 00:16:03.194 ------------ 00:16:03.194 Flush (00h): Supported 00:16:03.194 Write (01h): Supported LBA-Change 00:16:03.194 Read (02h): Supported 00:16:03.194 Write Zeroes (08h): Supported LBA-Change 00:16:03.194 Dataset Management (09h): Supported 00:16:03.194 00:16:03.194 Error Log 00:16:03.194 ========= 00:16:03.194 Entry: 0 00:16:03.194 Error Count: 0x3 00:16:03.194 Submission Queue Id: 0x0 00:16:03.194 Command Id: 0x5 00:16:03.194 Phase Bit: 0 00:16:03.194 Status Code: 0x2 00:16:03.194 Status Code Type: 0x0 00:16:03.194 Do Not Retry: 1 00:16:03.194 Error Location: 0x28 00:16:03.194 LBA: 0x0 00:16:03.194 Namespace: 0x0 00:16:03.194 Vendor Log Page: 0x0 00:16:03.194 ----------- 00:16:03.194 Entry: 1 00:16:03.194 Error Count: 0x2 00:16:03.194 Submission Queue Id: 0x0 00:16:03.194 Command Id: 0x5 00:16:03.194 Phase Bit: 0 00:16:03.194 Status Code: 0x2 00:16:03.194 Status Code Type: 0x0 00:16:03.194 Do Not Retry: 1 00:16:03.194 Error Location: 0x28 00:16:03.194 LBA: 0x0 00:16:03.194 Namespace: 0x0 00:16:03.194 Vendor Log Page: 0x0 00:16:03.194 ----------- 00:16:03.194 Entry: 2 00:16:03.194 Error Count: 0x1 00:16:03.194 Submission Queue Id: 0x0 00:16:03.194 Command Id: 0x4 00:16:03.194 Phase Bit: 0 00:16:03.194 Status Code: 0x2 00:16:03.194 Status Code Type: 0x0 00:16:03.194 Do Not Retry: 1 00:16:03.194 Error Location: 0x28 00:16:03.194 LBA: 0x0 00:16:03.194 Namespace: 0x0 00:16:03.194 Vendor Log Page: 0x0 00:16:03.194 00:16:03.194 Number of Queues 00:16:03.194 ================ 00:16:03.194 Number of I/O Submission Queues: 128 00:16:03.194 Number of I/O Completion Queues: 128 00:16:03.194 00:16:03.194 ZNS Specific Controller Data 00:16:03.194 ============================ 00:16:03.194 Zone Append Size Limit: 0 00:16:03.194 00:16:03.194 00:16:03.194 Active Namespaces 00:16:03.194 ================= 00:16:03.194 get_feature(0x05) failed 00:16:03.194 Namespace ID:1 00:16:03.194 Command Set Identifier: NVM (00h) 
00:16:03.194 Deallocate: Supported 00:16:03.194 Deallocated/Unwritten Error: Not Supported 00:16:03.194 Deallocated Read Value: Unknown 00:16:03.194 Deallocate in Write Zeroes: Not Supported 00:16:03.194 Deallocated Guard Field: 0xFFFF 00:16:03.194 Flush: Supported 00:16:03.194 Reservation: Not Supported 00:16:03.194 Namespace Sharing Capabilities: Multiple Controllers 00:16:03.194 Size (in LBAs): 1310720 (5GiB) 00:16:03.194 Capacity (in LBAs): 1310720 (5GiB) 00:16:03.194 Utilization (in LBAs): 1310720 (5GiB) 00:16:03.194 UUID: dd10d589-3fa8-4463-af96-b130b281a017 00:16:03.194 Thin Provisioning: Not Supported 00:16:03.194 Per-NS Atomic Units: Yes 00:16:03.194 Atomic Boundary Size (Normal): 0 00:16:03.194 Atomic Boundary Size (PFail): 0 00:16:03.194 Atomic Boundary Offset: 0 00:16:03.194 NGUID/EUI64 Never Reused: No 00:16:03.194 ANA group ID: 1 00:16:03.194 Namespace Write Protected: No 00:16:03.194 Number of LBA Formats: 1 00:16:03.194 Current LBA Format: LBA Format #00 00:16:03.194 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:03.194 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.194 rmmod nvme_tcp 00:16:03.194 rmmod nvme_fabrics 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:03.194 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:03.195 
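For reference, the kernel target exercised by both identify dumps was built earlier through nvmet configfs (the mkdir/echo/ln -s sequence at nvmf/common.sh@658-@677) and is torn down in the clean_kernel_target steps that follow. A condensed sketch of both halves is below; the xtrace does not show the redirection targets of the echo commands, so the attribute file names are the standard nvmet configfs ones and should be read as an assumption rather than a verbatim copy of the script.

    # Sketch of the kernel nvmet target lifecycle seen in this log (NQN, device and
    # addresses taken from the log; attribute names assumed to be the stock nvmet ones).
    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet

    # setup
    mkdir "$cfg/subsystems/$nqn"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    mkdir "$cfg/ports/1"
    echo "SPDK-$nqn"    > "$cfg/subsystems/$nqn/attr_model"          # matches the Model Number in the identify dump
    echo 1              > "$cfg/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme1n1   > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1              > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1       > "$cfg/ports/1/addr_traddr"
    echo tcp            > "$cfg/ports/1/addr_trtype"
    echo 4420           > "$cfg/ports/1/addr_trsvcid"
    echo ipv4           > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"

    # teardown (clean_kernel_target)
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    rm -f  "$cfg/ports/1/subsystems/$nqn"
    rmdir  "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet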
21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:03.195 21:29:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:04.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:04.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:04.129 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:04.387 00:16:04.387 real 0m3.318s 00:16:04.387 user 0m1.088s 00:16:04.387 sys 0m1.752s 00:16:04.387 21:29:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.387 21:29:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.387 ************************************ 00:16:04.387 END TEST nvmf_identify_kernel_target 00:16:04.387 ************************************ 00:16:04.387 21:29:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.387 21:29:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:04.387 21:29:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.387 21:29:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.387 21:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.387 ************************************ 00:16:04.387 START TEST nvmf_auth_host 00:16:04.387 ************************************ 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:04.387 * Looking for test storage... 
00:16:04.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.387 21:29:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.388 21:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:04.646 Cannot find device "nvmf_tgt_br" 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.646 Cannot find device "nvmf_tgt_br2" 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:04.646 Cannot find device "nvmf_tgt_br" 
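nvmf_veth_init first tears down any leftover interfaces, so the "Cannot find device" messages above are expected on a clean host, and then builds the test topology shown in the output that follows: a namespace nvmf_tgt_ns_spdk holding two target-side veth endpoints (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, a bridge nvmf_br joining the peer ends, and an iptables ACCEPT rule for TCP port 4420. The sketch below condenses those commands with the same names and addresses as the log.

    # Condensed recap of the topology nvmf_veth_init creates (names/IPs from this log).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT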
00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.646 Cannot find device "nvmf_tgt_br2" 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.646 21:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.904 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:04.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:04.904 00:16:04.905 --- 10.0.0.2 ping statistics --- 00:16:04.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.905 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:04.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:04.905 00:16:04.905 --- 10.0.0.3 ping statistics --- 00:16:04.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.905 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:16:04.905 00:16:04.905 --- 10.0.0.1 ping statistics --- 00:16:04.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.905 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77909 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77909 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77909 ']' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.905 21:29:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.905 21:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.863 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ad2604bcceb669f965c7d24629774fa6 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bAt 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ad2604bcceb669f965c7d24629774fa6 0 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ad2604bcceb669f965c7d24629774fa6 0 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ad2604bcceb669f965c7d24629774fa6 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bAt 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bAt 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bAt 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=56a43b9ae561434cc84b498016c3cf103fadb8effb51f2f82b87ab284a24b7f4 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.76c 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 56a43b9ae561434cc84b498016c3cf103fadb8effb51f2f82b87ab284a24b7f4 3 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 56a43b9ae561434cc84b498016c3cf103fadb8effb51f2f82b87ab284a24b7f4 3 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=56a43b9ae561434cc84b498016c3cf103fadb8effb51f2f82b87ab284a24b7f4 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:05.864 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.76c 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.76c 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.76c 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6f23274e175def0251bc9c90d127d2a15c82416af3f3cfbe 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.m6i 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6f23274e175def0251bc9c90d127d2a15c82416af3f3cfbe 0 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6f23274e175def0251bc9c90d127d2a15c82416af3f3cfbe 0 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6f23274e175def0251bc9c90d127d2a15c82416af3f3cfbe 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.m6i 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.m6i 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.m6i 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=923595030c0dd62cb8f32e4c78e06efc45e9dd33f69469ff 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mZ6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 923595030c0dd62cb8f32e4c78e06efc45e9dd33f69469ff 2 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 923595030c0dd62cb8f32e4c78e06efc45e9dd33f69469ff 2 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=923595030c0dd62cb8f32e4c78e06efc45e9dd33f69469ff 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mZ6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mZ6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mZ6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c2ba2d2fbfb918599e55f38615d9a1c6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gwj 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c2ba2d2fbfb918599e55f38615d9a1c6 
1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c2ba2d2fbfb918599e55f38615d9a1c6 1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c2ba2d2fbfb918599e55f38615d9a1c6 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gwj 00:16:06.123 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gwj 00:16:06.382 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Gwj 00:16:06.382 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:06.382 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.382 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.382 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2a68f3359b813cdcf8b63903c463bd67 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dZ0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2a68f3359b813cdcf8b63903c463bd67 1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2a68f3359b813cdcf8b63903c463bd67 1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2a68f3359b813cdcf8b63903c463bd67 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dZ0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dZ0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dZ0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:06.383 21:29:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=578da5ca7eace5a822d46da7d643c6d0562625b7d27ff92e 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ivk 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 578da5ca7eace5a822d46da7d643c6d0562625b7d27ff92e 2 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 578da5ca7eace5a822d46da7d643c6d0562625b7d27ff92e 2 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=578da5ca7eace5a822d46da7d643c6d0562625b7d27ff92e 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ivk 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ivk 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ivk 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eefb98e0e498ef3cad7d4251a0abfef3 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EVp 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eefb98e0e498ef3cad7d4251a0abfef3 0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eefb98e0e498ef3cad7d4251a0abfef3 0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eefb98e0e498ef3cad7d4251a0abfef3 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EVp 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EVp 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.EVp 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b81ac8cd8afd8e9af26be6f4ccdf40d929b1eebc4aff14021b32b433a44b630 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6pM 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b81ac8cd8afd8e9af26be6f4ccdf40d929b1eebc4aff14021b32b433a44b630 3 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6b81ac8cd8afd8e9af26be6f4ccdf40d929b1eebc4aff14021b32b433a44b630 3 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:06.383 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b81ac8cd8afd8e9af26be6f4ccdf40d929b1eebc4aff14021b32b433a44b630 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6pM 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6pM 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6pM 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77909 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 77909 ']' 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
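Every secret generated above follows the same recipe: gen_dhchap_key pulls random bytes from /dev/urandom with xxd, wraps the hex string in the DH-HMAC-CHAP secret form seen later in the trace (DHHC-1:<digest code>:<base64 payload>:), writes it to a mktemp file and locks it down with chmod 0600. Below is a minimal standalone sketch of that flow; the function name is made up, and the payload encoding (base64 of the ASCII key followed by its little-endian CRC32) is an assumption inferred from the trace rather than a quote of nvmf/common.sh.

#!/usr/bin/env bash
# Hypothetical re-creation of the gen_dhchap_key flow traced above (not the harness code).
# Digest codes follow the map in the trace: 0=null, 1=sha256, 2=sha384, 3=sha512.
gen_dhchap_key_sketch() {
    local digest=$1 hexlen=$2 hexkey file
    hexkey=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hexlen=48 -> 24 random bytes
    file=$(mktemp -t spdk.key-sketch.XXX)
    # Assumed DHHC-1 payload: base64(ASCII hex key || CRC32 of it, little-endian).
    python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode("ascii")
crc = zlib.crc32(k).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' \
        "$hexkey" "$digest" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_dhchap_key_sketch 2 48   # a sha384-class secret, comparable to keys[3]=/tmp/spdk.key-sha384.Ivk

The printed path plays the same role as /tmp/spdk.key-sha384.Ivk or /tmp/spdk.key-null.EVp above: a 0600 file holding a single DHHC-1 secret that the test later registers with the keyring.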
00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.642 21:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bAt 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.76c ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.76c 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.m6i 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mZ6 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mZ6 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Gwj 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dZ0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dZ0 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
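rpc_cmd in this trace is the autotest wrapper around scripts/rpc.py, pointed at the application that was just waited on via /var/tmp/spdk.sock. The loop starting above (and continuing on the next line with key3/ckey3 and key4) registers every generated secret under a predictable name. Run by hand, the same registration looks roughly like the sketch below; the rpc.py path and socket are assumptions taken from this build environment, and the key file names are the ones generated earlier in the trace.

# Register each DHCHAP secret (and its controller counterpart, when present) with the keyring.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path for this environment
sock=/var/tmp/spdk.sock
keys=(/tmp/spdk.key-null.bAt /tmp/spdk.key-null.m6i /tmp/spdk.key-sha256.Gwj
      /tmp/spdk.key-sha384.Ivk /tmp/spdk.key-sha512.6pM)
ckeys=(/tmp/spdk.key-sha512.76c /tmp/spdk.key-sha384.mZ6 /tmp/spdk.key-sha256.dZ0
       /tmp/spdk.key-null.EVp "")
for i in "${!keys[@]}"; do
    "$rpc" -s "$sock" keyring_file_add_key "key$i" "${keys[i]}"
    # ckeys[4] is empty in this run, so key4 gets no controller key.
    [[ -n ${ckeys[i]} ]] && "$rpc" -s "$sock" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done

After this, key0..key4 and ckey0..ckey3 can be referenced by name, which is exactly how the connect attempts further down pass --dhchap-key and --dhchap-ctrlr-key to bdev_nvme_attach_controller.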
00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ivk 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.EVp ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.EVp 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6pM 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:06.901 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
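nvmet_auth_init and configure_kernel_target, which the trace walks through next, reduce to a handful of configfs writes that stand up a kernel NVMe-oF/TCP target backed by the free /dev/nvme1n1 namespace found below. The xtrace does not show redirection targets, so the attribute file names in this condensed sketch are the stock kernel nvmet ones and should be read as assumptions about where the harness writes; the NQNs, address, and port are the values used in this run.

# Condensed sketch of the kernel target setup performed below (attribute names assumed).
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # only unpartitioned, non-zoned disk in this VM
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
echo 0            > "$subsys/attr_allow_any_host"        # only the allow-listed host NQN may connect
ln -s "$subsys" "$port/subsystems/"
ln -s "$host"   "$subsys/allowed_hosts/"

Once the port/subsystems link exists, nvme discover against 10.0.0.1:4420 reports the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, matching the Discovery Log printed further down; each nvmet_auth_set_key call then echoes the hash name, DH group and DHHC-1 secrets (into that host entry's dhchap_* attributes, by the look of the trace) before the initiator retries the connection with bdev_nvme_attach_controller --dhchap-key keyN --dhchap-ctrlr-key ckeyN.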
00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:06.902 21:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:07.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:07.469 Waiting for block devices as requested 00:16:07.469 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:07.728 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:08.300 No valid GPT data, bailing 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:08.300 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:08.559 No valid GPT data, bailing 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:08.559 No valid GPT data, bailing 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:08.559 No valid GPT data, bailing 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:08.559 21:29:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:08.559 21:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -a 10.0.0.1 -t tcp -s 4420 00:16:08.818 00:16:08.818 Discovery Log Number of Records 2, Generation counter 2 00:16:08.818 =====Discovery Log Entry 0====== 00:16:08.818 trtype: tcp 00:16:08.818 adrfam: ipv4 00:16:08.818 subtype: current discovery subsystem 00:16:08.818 treq: not specified, sq flow control disable supported 00:16:08.818 portid: 1 00:16:08.818 trsvcid: 4420 00:16:08.818 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:08.818 traddr: 10.0.0.1 00:16:08.818 eflags: none 00:16:08.818 sectype: none 00:16:08.818 =====Discovery Log Entry 1====== 00:16:08.818 trtype: tcp 00:16:08.818 adrfam: ipv4 00:16:08.818 subtype: nvme subsystem 00:16:08.818 treq: not specified, sq flow control disable supported 00:16:08.818 portid: 1 00:16:08.818 trsvcid: 4420 00:16:08.818 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:08.818 traddr: 10.0.0.1 00:16:08.818 eflags: none 00:16:08.818 sectype: none 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.818 21:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.818 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:08.818 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 nvme0n1 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 nvme0n1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.078 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 nvme0n1 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.337 21:29:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 nvme0n1 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.337 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:09.596 21:29:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 nvme0n1 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.596 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:09.597 21:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.855 nvme0n1 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.855 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.856 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.113 nvme0n1 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.113 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.114 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 nvme0n1 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.371 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.372 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 nvme0n1 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.630 21:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.631 21:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.631 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.631 21:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.889 nvme0n1 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.889 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.890 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.148 nvme0n1 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.148 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
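The cycle host/auth.sh is tracing above (the loops at @101/@102 and connect_authenticate at @104) boils down to the sketch below, reconstructed from the xtrace output; rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are assumed to be provided by the test environment exactly as they appear in the log:

  # For every DH group and every key id: program the target-side key, constrain the
  # host to that digest/dhgroup, authenticate a TCP connection, confirm the controller
  # shows up, then detach it again.
  for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do               # key ids 0..4 as seen above
      nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
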
00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.714 nvme0n1 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.714 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.972 nvme0n1 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.972 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.230 nvme0n1 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.230 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.488 nvme0n1 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:12.488 21:29:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.488 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.746 nvme0n1 00:16:12.746 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.746 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.746 21:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.746 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.746 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.747 21:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.747 21:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.655 nvme0n1 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:14.655 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.656 21:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 nvme0n1 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.913 
21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.913 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.170 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 nvme0n1 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.498 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.757 nvme0n1 00:16:15.757 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.757 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.757 21:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.757 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.757 21:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.757 21:29:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.757 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.015 nvme0n1 00:16:16.015 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.015 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.015 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.015 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.015 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.273 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 nvme0n1 00:16:16.837 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.837 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.837 21:29:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.837 21:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.837 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 21:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.837 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.402 nvme0n1 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.402 21:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 nvme0n1 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.984 
21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.984 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.551 nvme0n1 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.551 
21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.551 21:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.116 nvme0n1 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.116 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.375 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 nvme0n1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
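[Editor's note, not part of the captured trace] The entries above and below all follow the same round-trip, so here is a minimal sketch of one iteration reconstructed from the xtrace output: the target-side key is installed first, then the host pins its DH-HMAC-CHAP digest/dhgroup, connects with the matching key pair, checks that the controller appeared, and detaches. nvmet_auth_set_key and rpc_cmd are the suite's own helpers (host/auth.sh and the common autotest scripts); the DHHC-1 key blobs are elided, and for key indexes without a controller key (keyid 4 in this run) the --dhchap-ctrlr-key argument is simply omitted, as the ${ckeys[keyid]:+...} expansion in the trace shows.

    # Sketch of one iteration, assuming the helpers defined by the test suite.
    digest=sha384; dhgroup=ffdhe2048; keyid=1
    # Target side: install the key (and ctrlr key, when defined) for the host NQN.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # Host side: restrict digest/dhgroup, connect with the matching key pair,
    # confirm the controller shows up, then tear it down again.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0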
00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.376 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 nvme0n1 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.634 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 nvme0n1 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.635 21:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:19.901 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.902 nvme0n1 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.902 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.160 nvme0n1 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.160 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 nvme0n1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
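Each successful attach in the log is followed by the same verification and teardown before the next key is tried: bdev_nvme_get_controllers must report the controller as nvme0 (the nvme0n1 lines are its namespace appearing), after which the controller is detached. A hedged sketch, again assuming scripts/rpc.py as the rpc_cmd frontend:

# Verify the authenticated controller came up, then detach it so the next
# digest/dhgroup/keyid combination starts from a clean state.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]]        # same check as the log's [[ nvme0 == \n\v\m\e\0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0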
00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 nvme0n1 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.420 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.692 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 nvme0n1 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:20.693 21:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.693 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.951 nvme0n1 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:20.951 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.952 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.209 nvme0n1 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.209 21:29:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.209 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.468 nvme0n1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.468 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.726 nvme0n1 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.726 21:29:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.726 21:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.726 nvme0n1 00:16:21.726 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.726 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.726 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.726 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.726 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:21.984 21:29:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.984 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.985 nvme0n1 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.985 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.243 nvme0n1 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.243 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:22.501 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.502 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 nvme0n1 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.760 21:29:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.760 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.019 nvme0n1 00:16:23.019 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.019 21:29:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.019 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.019 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.019 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.019 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.277 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.536 nvme0n1 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:23.536 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.537 21:29:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.795 nvme0n1 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.795 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.055 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
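The nvmet_auth_set_key calls traced above show only bare echo commands because bash xtrace does not print redirections. A minimal sketch of what that helper plausibly does on the target side follows, assuming the echoed values are written into the kernel nvmet configfs entry for the host NQN; the attribute paths are an assumption and are not visible in this log.

    # Hypothetical reconstruction of the nvmet_auth_set_key helper seen in the trace.
    # Assumption: the echoes are redirected into the kernel nvmet configfs host entry;
    # xtrace hides the actual redirection targets, so the paths below are illustrative.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

        echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. 'hmac(sha384)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"    # e.g. ffdhe6144
        echo "$key"          > "$host/dhchap_key"        # host secret, DHHC-1:xx:...:
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # optional bidirectional secret
    }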
00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.056 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.315 nvme0n1 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
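On the host side, each pass in this trace reduces to the same four-step sequence; a condensed sketch of one iteration is shown below, with values taken from the sha384/ffdhe6144 pass earlier in this trace. rpc_cmd is the test harness's wrapper around SPDK's JSON-RPC client, and the key1/ckey1 names are assumed to have been registered with the bdev_nvme layer earlier in the script.

    # Restrict the host to the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Connect over TCP with DH-HMAC-CHAP, offering key1 and the controller key ckey1.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The controller only shows up if authentication succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Tear down before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0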
00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.315 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.316 21:29:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.884 nvme0n1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.884 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.451 nvme0n1 00:16:25.451 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.451 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.452 21:29:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.020 nvme0n1 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.020 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.589 nvme0n1 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.589 21:29:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.589 21:29:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.155 nvme0n1 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.155 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.156 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 nvme0n1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.416 21:30:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.416 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.675 nvme0n1 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.675 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 nvme0n1 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 21:30:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.676 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.935 21:30:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 nvme0n1 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.935 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.936 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.195 nvme0n1 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.195 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 nvme0n1 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.196 
21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.196 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.455 21:30:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.455 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.456 nvme0n1 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
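For readers skimming the repeated nvmet_auth_set_key traces: the echoed values ('hmac(sha512)', the ffdhe group, and the DHHC-1 secrets) are what the helper loads into the kernel nvmet target for the test host before each connect attempt. The redirect targets are not visible in the xtrace, so the following is only a sketch of the idea, assuming the mainline nvmet configfs layout and its dhchap_* attribute names:

  # Sketch only: paths and attribute names are assumptions, not taken from this log.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)'           > "$host_dir/dhchap_hash"      # digest for the DH-HMAC-CHAP transform
  echo ffdhe3072                > "$host_dir/dhchap_dhgroup"   # FFDHE group for the DH exchange
  echo 'DHHC-1:01:<base64>...:' > "$host_dir/dhchap_key"       # host secret (keyid 2 in this pass)
  echo 'DHHC-1:01:<base64>...:' > "$host_dir/dhchap_ctrl_key"  # controller secret, only for bidirectional auth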
00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.456 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.717 nvme0n1 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.717 21:30:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
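The get_main_ns_ip trace that precedes every attach (nvmf/common.sh@741-755) is a small transport-to-variable lookup followed by an indirect expansion. A condensed rendering of that logic; the name of the variable holding the transport ("tcp" here) is not visible in the trace and is shown as TEST_TRANSPORT by assumption:

  # Condensed from the xtrace above; not the verbatim helper.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp in this run, so ip=NVMF_INITIATOR_IP
      [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion prints 10.0.0.1 here
  }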
00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.717 21:30:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 nvme0n1 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.976 
21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 nvme0n1 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.976 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.977 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.977 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.977 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.235 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.236 nvme0n1 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.236 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.495 21:30:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 nvme0n1 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
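Each pass exercises the same host-side sequence over SPDK's RPC interface: restrict the allowed digest and DH group, attach with the key names registered for this keyid, and tear the controller down again. Written out directly against scripts/rpc.py (a sketch; rpc_cmd is assumed to wrap rpc.py, and key2/ckey2 are keyring entries set up earlier in the test, outside this excerpt):

  # One authentication pass (ffdhe4096, keyid 2), spelled out with rpc.py.
  rpc() { scripts/rpc.py "$@"; }   # stand-in for the test's rpc_cmd wrapper
  rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc bdev_nvme_detach_controller nvme0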
00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:29.495 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:29.754 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.755 21:30:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.755 nvme0n1 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.755 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 nvme0n1 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.014 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.015 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.015 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.015 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 nvme0n1 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
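The @101/@102 markers above are the dhgroup and keyid loops driving all of this output, and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what quietly drops --dhchap-ctrlr-key for keyid 4, whose controller key is empty. A small standalone demo of that idiom (array contents here are made up for illustration):

  # Only emit the controller-key arguments when a ctrl key exists for this keyid.
  declare -a ckeys=([1]="some-ctrl-key" [4]="")
  for keyid in 1 4; do
      args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${args[*]:-no ctrlr key args}"
  done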
00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.274 21:30:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.532 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.532 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.532 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.791 nvme0n1 00:16:30.791 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.791 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.791 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.791 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.792 21:30:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
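Between passes the test confirms that exactly the expected controller came up authenticated, then detaches it before the next key is loaded; the check at host/auth.sh@64-65 condenses to:

  # Per-pass verification and teardown (condensed from the trace above).
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                        # expected controller name; a mismatch fails this step
  rpc_cmd bdev_nvme_detach_controller nvme0   # clean up before the next dhgroup/keyid combination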
00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.792 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.051 nvme0n1 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.051 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.052 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.620 nvme0n1 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.620 21:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 nvme0n1 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.879 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.139 nvme0n1 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.139 21:30:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.139 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:32.396 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWQyNjA0YmNjZWI2NjlmOTY1YzdkMjQ2Mjk3NzRmYTaugZDM: 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: ]] 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTZhNDNiOWFlNTYxNDM0Y2M4NGI0OTgwMTZjM2NmMTAzZmFkYjhlZmZiNTFmMmY4MmI4N2FiMjg0YTI0YjdmNJSjwJs=: 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.397 21:30:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.964 nvme0n1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.964 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.531 nvme0n1 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.531 21:30:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJiYTJkMmZiZmI5MTg1OTllNTVmMzg2MTVkOWExYzarOPQ2: 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmE2OGYzMzU5YjgxM2NkY2Y4YjYzOTAzYzQ2M2JkNjclaG8P: 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.531 21:30:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.097 nvme0n1 00:16:34.097 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.097 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc4ZGE1Y2E3ZWFjZTVhODIyZDQ2ZGE3ZDY0M2M2ZDA1NjI2MjViN2QyN2ZmOTJl1AoUsg==: 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWVmYjk4ZTBlNDk4ZWYzY2FkN2Q0MjUxYTBhYmZlZjOn0B5b: 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:34.098 21:30:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.098 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.664 nvme0n1 00:16:34.664 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.664 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI4MWFjOGNkOGFmZDhlOWFmMjZiZTZmNGNjZGY0MGQ5MjliMWVlYmM0YWZmMTQwMjFiMzJiNDMzYTQ0YjYzMD80DGc=: 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:34.665 21:30:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 nvme0n1 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmYyMzI3NGUxNzVkZWYwMjUxYmM5YzkwZDEyN2QyYTE1YzgyNDE2YWYzZjNjZmJlHtiZSQ==: 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTIzNTk1MDMwYzBkZDYyY2I4ZjMyZTRjNzhlMDZlZmM0NWU5ZGQzM2Y2OTQ2OWZm9yoYCQ==: 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.233 
21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 request: 00:16:35.233 { 00:16:35.233 "name": "nvme0", 00:16:35.233 "trtype": "tcp", 00:16:35.233 "traddr": "10.0.0.1", 00:16:35.233 "adrfam": "ipv4", 00:16:35.233 "trsvcid": "4420", 00:16:35.233 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.233 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.233 "prchk_reftag": false, 00:16:35.233 "prchk_guard": false, 00:16:35.233 "hdgst": false, 00:16:35.233 "ddgst": false, 00:16:35.233 "method": "bdev_nvme_attach_controller", 00:16:35.233 "req_id": 1 00:16:35.233 } 00:16:35.233 Got JSON-RPC error response 00:16:35.233 response: 00:16:35.233 { 00:16:35.233 "code": -5, 00:16:35.233 "message": "Input/output error" 00:16:35.233 } 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.233 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.493 request: 00:16:35.493 { 00:16:35.493 "name": "nvme0", 00:16:35.493 "trtype": "tcp", 00:16:35.493 "traddr": "10.0.0.1", 00:16:35.493 "adrfam": "ipv4", 00:16:35.493 "trsvcid": "4420", 00:16:35.493 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.493 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.493 "prchk_reftag": false, 00:16:35.493 "prchk_guard": false, 00:16:35.493 "hdgst": false, 00:16:35.493 "ddgst": false, 00:16:35.493 "dhchap_key": "key2", 00:16:35.493 "method": "bdev_nvme_attach_controller", 00:16:35.493 "req_id": 1 00:16:35.493 } 00:16:35.493 Got JSON-RPC error response 00:16:35.493 response: 00:16:35.493 { 00:16:35.493 "code": -5, 00:16:35.493 "message": "Input/output error" 00:16:35.493 } 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:35.493 21:30:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.493 request: 00:16:35.493 { 00:16:35.493 "name": "nvme0", 00:16:35.493 "trtype": "tcp", 00:16:35.493 "traddr": "10.0.0.1", 00:16:35.493 "adrfam": "ipv4", 
00:16:35.493 "trsvcid": "4420", 00:16:35.493 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.493 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.493 "prchk_reftag": false, 00:16:35.493 "prchk_guard": false, 00:16:35.493 "hdgst": false, 00:16:35.493 "ddgst": false, 00:16:35.493 "dhchap_key": "key1", 00:16:35.493 "dhchap_ctrlr_key": "ckey2", 00:16:35.493 "method": "bdev_nvme_attach_controller", 00:16:35.493 "req_id": 1 00:16:35.493 } 00:16:35.493 Got JSON-RPC error response 00:16:35.493 response: 00:16:35.493 { 00:16:35.493 "code": -5, 00:16:35.493 "message": "Input/output error" 00:16:35.493 } 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.493 rmmod nvme_tcp 00:16:35.493 rmmod nvme_fabrics 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77909 ']' 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77909 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 77909 ']' 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 77909 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.493 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77909 00:16:35.751 killing process with pid 77909 00:16:35.751 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:35.752 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:35.752 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77909' 00:16:35.752 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 77909 00:16:35.752 21:30:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 77909 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.752 
21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:35.752 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:36.009 21:30:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:36.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.945 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.945 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:36.945 21:30:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bAt /tmp/spdk.key-null.m6i /tmp/spdk.key-sha256.Gwj /tmp/spdk.key-sha384.Ivk /tmp/spdk.key-sha512.6pM /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:36.945 21:30:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:37.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.514 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.514 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.514 00:16:37.514 real 0m33.220s 00:16:37.514 user 0m30.296s 00:16:37.514 sys 0m4.789s 00:16:37.514 ************************************ 00:16:37.514 END TEST nvmf_auth_host 00:16:37.514 ************************************ 00:16:37.514 21:30:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:16:37.514 21:30:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.514 21:30:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:37.514 21:30:10 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:16:37.514 21:30:10 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:37.514 21:30:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:37.514 21:30:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.514 21:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.514 ************************************ 00:16:37.514 START TEST nvmf_digest 00:16:37.514 ************************************ 00:16:37.514 21:30:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:37.773 * Looking for test storage... 00:16:37.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:16:37.773 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:37.774 Cannot find device "nvmf_tgt_br" 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.774 Cannot find device "nvmf_tgt_br2" 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:37.774 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:38.033 Cannot find device "nvmf_tgt_br" 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:38.033 21:30:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:38.033 Cannot find device "nvmf_tgt_br2" 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.033 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.034 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.292 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.292 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.292 21:30:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.292 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:38.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:38.292 00:16:38.292 --- 10.0.0.2 ping statistics --- 00:16:38.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.293 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:38.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:16:38.293 00:16:38.293 --- 10.0.0.3 ping statistics --- 00:16:38.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.293 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:38.293 00:16:38.293 --- 10.0.0.1 ping statistics --- 00:16:38.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.293 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:38.293 ************************************ 00:16:38.293 START TEST nvmf_digest_clean 00:16:38.293 ************************************ 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:38.293 21:30:11 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79467 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79467 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79467 ']' 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.293 21:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.293 [2024-07-15 21:30:11.579247] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:38.293 [2024-07-15 21:30:11.579325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.552 [2024-07-15 21:30:11.722608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.552 [2024-07-15 21:30:11.823105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.552 [2024-07-15 21:30:11.823161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.552 [2024-07-15 21:30:11.823174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.552 [2024-07-15 21:30:11.823185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.552 [2024-07-15 21:30:11.823194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
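Note on the network bring-up logged just above: nvmf_veth_init builds a small virtual topology, one initiator-side veth pair kept on the host, target-side pairs moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joining the host-side peer ends, with 10.0.0.1/24 on the initiator interface and 10.0.0.2/10.0.0.3 inside the namespace. A minimal sketch of the same idea, condensed to a single target interface (the full helper in nvmf/common.sh also creates nvmf_tgt_if2 with 10.0.0.3; this condensed form is illustrative, not the helper itself):

    # Illustrative condensation of the nvmf_veth_init steps seen in the xtrace above.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator reaching the target namespace across the bridge

The "Cannot find device" and "No such file or directory" messages earlier in the teardown phase are expected: the cleanup commands run unconditionally before the fresh topology is created.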
00:16:38.552 [2024-07-15 21:30:11.823229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.156 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.415 [2024-07-15 21:30:12.545335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.415 null0 00:16:39.415 [2024-07-15 21:30:12.588597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.415 [2024-07-15 21:30:12.612671] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79499 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79499 /var/tmp/bperf.sock 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79499 ']' 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:39.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.415 21:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.415 [2024-07-15 21:30:12.667525] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:39.415 [2024-07-15 21:30:12.667774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79499 ] 00:16:39.673 [2024-07-15 21:30:12.810902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.673 [2024-07-15 21:30:12.907250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.241 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.241 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:40.241 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:40.241 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:40.241 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:40.500 [2024-07-15 21:30:13.771887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:40.500 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.500 21:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:40.758 nvme0n1 00:16:40.758 21:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:40.758 21:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:41.016 Running I/O for 2 seconds... 
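Each run_bperf pass drives bdevperf purely over its private JSON-RPC socket: the app is launched with --wait-for-rpc, framework_start_init is issued, an NVMe/TCP controller is attached with data digest enabled (--ddgst), and bdevperf.py perform_tests starts the 2-second workload. A sketch of that driving sequence, using the same paths and arguments as the log (error handling and the waitforlisten step are omitted; the real script waits for the RPC socket before issuing commands):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Workload app, parked until RPC-driven init (first combination shown: randread, 4 KiB, QD 128).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    "$RPC" -s "$BPERF_SOCK" framework_start_init
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$BPERF_SOCK" perform_tests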
00:16:42.921 00:16:42.921 Latency(us) 00:16:42.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.921 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:42.921 nvme0n1 : 2.00 19071.42 74.50 0.00 0.00 6707.19 6132.49 16318.20 00:16:42.921 =================================================================================================================== 00:16:42.921 Total : 19071.42 74.50 0.00 0.00 6707.19 6132.49 16318.20 00:16:42.921 0 00:16:42.921 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:42.921 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:42.921 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:42.921 | select(.opcode=="crc32c") 00:16:42.921 | "\(.module_name) \(.executed)"' 00:16:42.921 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:42.921 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79499 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79499 ']' 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79499 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79499 00:16:43.180 killing process with pid 79499 00:16:43.180 Received shutdown signal, test time was about 2.000000 seconds 00:16:43.180 00:16:43.180 Latency(us) 00:16:43.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.180 =================================================================================================================== 00:16:43.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79499' 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79499 00:16:43.180 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79499 00:16:43.439 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:43.439 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:43.439 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:43.439 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:43.439 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79559 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79559 /var/tmp/bperf.sock 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79559 ']' 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:43.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.440 21:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:43.440 [2024-07-15 21:30:16.712587] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:43.440 [2024-07-15 21:30:16.713505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79559 ] 00:16:43.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:43.440 Zero copy mechanism will not be used. 
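After each 2-second run the script pulls bdevperf's accel statistics and asserts that crc32c digest operations were actually executed, and by the expected module (software here, because DSA scanning is disabled, scan_dsa=false). A minimal sketch of that verification, reusing the jq filter seen in the xtrace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    read -r acc_module acc_executed < <(
        "$RPC" -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # at least one digest was computed during the run
    [[ $acc_module == software ]]     # and the software module did it, since DSA is off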
00:16:43.699 [2024-07-15 21:30:16.857096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.699 [2024-07-15 21:30:16.958043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.266 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.266 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:44.266 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:44.266 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:44.266 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:44.523 [2024-07-15 21:30:17.829013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:44.523 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.523 21:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.783 nvme0n1 00:16:45.042 21:30:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:45.042 21:30:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:45.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:45.042 Zero copy mechanism will not be used. 00:16:45.042 Running I/O for 2 seconds... 
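The per-job tables report runtime, IOPS, MiB/s and latency in microseconds; IOPS and throughput should agree with the configured IO size, which is a quick sanity check when reading them. For the 4096-byte randread run above (a back-of-the-envelope check, not part of the test):

    echo 'scale=2; 19071.42 * 4096 / 1048576' | bc    # IOPS times IO size, converted to MiB/s
    # prints 74.49, which lines up with the reported 74.50 MiB/s once rounding is considered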
00:16:46.942 00:16:46.942 Latency(us) 00:16:46.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.942 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:46.942 nvme0n1 : 2.00 8570.06 1071.26 0.00 0.00 1864.17 1750.26 4263.79 00:16:46.942 =================================================================================================================== 00:16:46.942 Total : 8570.06 1071.26 0.00 0.00 1864.17 1750.26 4263.79 00:16:46.942 0 00:16:46.942 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:46.942 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:46.942 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:46.942 | select(.opcode=="crc32c") 00:16:46.942 | "\(.module_name) \(.executed)"' 00:16:46.942 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:46.942 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79559 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79559 ']' 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79559 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79559 00:16:47.200 killing process with pid 79559 00:16:47.200 Received shutdown signal, test time was about 2.000000 seconds 00:16:47.200 00:16:47.200 Latency(us) 00:16:47.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.200 =================================================================================================================== 00:16:47.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79559' 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79559 00:16:47.200 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79559 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79614 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79614 /var/tmp/bperf.sock 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79614 ']' 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:47.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.457 21:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 [2024-07-15 21:30:20.772300] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:16:47.457 [2024-07-15 21:30:20.773296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79614 ] 00:16:47.715 [2024-07-15 21:30:20.915437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.715 [2024-07-15 21:30:21.014266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.647 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.647 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:48.648 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:48.648 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:48.648 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:48.648 [2024-07-15 21:30:21.898180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:48.648 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.648 21:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.906 nvme0n1 00:16:48.906 21:30:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:48.906 21:30:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:49.164 Running I/O for 2 seconds... 
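nvmf_digest_clean repeats the same attach/run/verify cycle for four combinations: randread and randwrite, each at 4 KiB with queue depth 128 and at 128 KiB with queue depth 16, always with scan_dsa=false. The matrix, written as a loop purely for illustration (the actual digest.sh spells the four run_bperf calls out individually, as the host/digest.sh line numbers in the xtrace show):

    for rw in randread randwrite; do
        run_bperf "$rw" 4096   128 false    # 4 KiB blocks, queue depth 128
        run_bperf "$rw" 131072 16  false    # 128 KiB blocks, queue depth 16
    done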
00:16:51.088 00:16:51.088 Latency(us) 00:16:51.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.088 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.088 nvme0n1 : 2.01 20887.95 81.59 0.00 0.00 6123.09 4579.62 11738.58 00:16:51.088 =================================================================================================================== 00:16:51.088 Total : 20887.95 81.59 0.00 0.00 6123.09 4579.62 11738.58 00:16:51.088 0 00:16:51.088 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:51.088 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:51.088 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:51.088 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:51.088 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:51.088 | select(.opcode=="crc32c") 00:16:51.088 | "\(.module_name) \(.executed)"' 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79614 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79614 ']' 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79614 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79614 00:16:51.347 killing process with pid 79614 00:16:51.347 Received shutdown signal, test time was about 2.000000 seconds 00:16:51.347 00:16:51.347 Latency(us) 00:16:51.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.347 =================================================================================================================== 00:16:51.347 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79614' 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79614 00:16:51.347 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79614 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:51.606 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79674 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79674 /var/tmp/bperf.sock 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 79674 ']' 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:51.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.607 21:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:51.607 [2024-07-15 21:30:24.805804] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:51.607 [2024-07-15 21:30:24.806603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79674 ] 00:16:51.607 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.607 Zero copy mechanism will not be used. 
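Every shutdown in this section goes through the same killprocess guard visible in the xtrace: confirm the pid is still alive with kill -0, read its command name with ps, make sure it is not sudo, then kill and wait for it. A hedged stand-in for that helper (the name kill_spdk_proc and the exact failure handling are illustrative; the real implementation lives in autotest_common.sh):

    kill_spdk_proc() {    # illustrative stand-in, not the autotest helper itself
        local pid=$1 process_name
        kill -0 "$pid" || return 0                            # already gone, nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name == sudo ]]; then                  # never signal sudo itself
            echo "refusing to kill sudo (pid $pid)" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                       # wait only succeeds for our own children
    }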
00:16:51.607 [2024-07-15 21:30:24.950092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.864 [2024-07-15 21:30:25.047056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.430 21:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.430 21:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:16:52.430 21:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:52.430 21:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:52.430 21:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:52.689 [2024-07-15 21:30:25.951880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:52.689 21:30:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.689 21:30:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.948 nvme0n1 00:16:52.948 21:30:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:52.948 21:30:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:53.208 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:53.208 Zero copy mechanism will not be used. 00:16:53.208 Running I/O for 2 seconds... 
00:16:55.111 00:16:55.111 Latency(us) 00:16:55.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.111 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:55.111 nvme0n1 : 2.00 6166.97 770.87 0.00 0.00 2590.20 1394.94 10475.23 00:16:55.111 =================================================================================================================== 00:16:55.111 Total : 6166.97 770.87 0.00 0.00 2590.20 1394.94 10475.23 00:16:55.111 0 00:16:55.111 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:55.111 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:55.111 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:55.111 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:55.111 | select(.opcode=="crc32c") 00:16:55.111 | "\(.module_name) \(.executed)"' 00:16:55.111 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79674 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 79674 ']' 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79674 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79674 00:16:55.371 killing process with pid 79674 00:16:55.371 Received shutdown signal, test time was about 2.000000 seconds 00:16:55.371 00:16:55.371 Latency(us) 00:16:55.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.371 =================================================================================================================== 00:16:55.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79674' 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79674 00:16:55.371 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79674 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79467 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 79467 ']' 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 79467 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.630 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79467 00:16:55.890 killing process with pid 79467 00:16:55.890 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.890 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.890 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79467' 00:16:55.890 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 79467 00:16:55.890 21:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 79467 00:16:55.890 00:16:55.890 real 0m17.673s 00:16:55.890 user 0m32.625s 00:16:55.890 sys 0m5.513s 00:16:55.890 ************************************ 00:16:55.890 END TEST nvmf_digest_clean 00:16:55.890 ************************************ 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:55.890 21:30:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 ************************************ 00:16:56.150 START TEST nvmf_digest_error 00:16:56.150 ************************************ 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79757 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79757 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79757 ']' 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.150 21:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 [2024-07-15 21:30:29.331422] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:16:56.150 [2024-07-15 21:30:29.331512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.150 [2024-07-15 21:30:29.472404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.409 [2024-07-15 21:30:29.570625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.409 [2024-07-15 21:30:29.570681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.409 [2024-07-15 21:30:29.570698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.409 [2024-07-15 21:30:29.570711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.409 [2024-07-15 21:30:29.570722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.409 [2024-07-15 21:30:29.570756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.978 [2024-07-15 21:30:30.330076] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:56.978 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.978 21:30:30 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:57.238 [2024-07-15 21:30:30.383148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:57.238 null0 00:16:57.238 [2024-07-15 21:30:30.431602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.238 [2024-07-15 21:30:30.455611] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79795 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79795 /var/tmp/bperf.sock 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79795 ']' 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:57.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.238 21:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:57.238 [2024-07-15 21:30:30.510615] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:16:57.238 [2024-07-15 21:30:30.511032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79795 ] 00:16:57.497 [2024-07-15 21:30:30.641121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.497 [2024-07-15 21:30:30.789569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.497 [2024-07-15 21:30:30.861630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:58.064 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.064 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:16:58.064 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.064 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.324 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.583 nvme0n1 00:16:58.583 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:58.583 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.584 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.843 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.843 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:58.843 21:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:58.843 Running I/O for 2 seconds... 
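The xtrace above reduces to a short RPC sequence; the sketch below condenses it for readability. It is reconstructed only from the commands visible in the trace, not an authoritative copy of host/digest.sh: the SPDK and BPERF_SOCK variable names, the backgrounding of bdevperf, and the assumption that a plain rpc.py call (the trace's rpc_cmd) reaches the nvmf target over its default socket are illustrative, while every flag, address and path is taken from the trace itself.

  # Sketch of the nvmf_digest_error setup traced above (assumptions noted in the lead-in).
  SPDK=/home/vagrant/spdk_repo/spdk          # repo path as it appears in the trace
  BPERF_SOCK=/var/tmp/bperf.sock             # bdevperf RPC socket used throughout

  # Start bdevperf idle (-z) so it can be configured over $BPERF_SOCK before any I/O runs.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z &

  # Host side: pass the same NVMe options as the trace, clear any previous injection on the
  # target, then attach the target subsystem with TCP data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: crc32c was assigned to the "error" accel module at startup
  # (accel_assign_opc -o crc32c -m error), so injecting corruption with the trace's
  # parameters (-t corrupt -i 256) makes the digests the target computes invalid.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second randread workload; the host-side digest checks then fail, which is
  # the run of "data digest error" completions that follows in the log.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The earlier nvmf_digest_clean pass follows the same pattern, except that no error is injected and the crc32c accel statistics are read back afterwards via accel_get_stats.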
00:16:58.843 [2024-07-15 21:30:32.092913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.092995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.093010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.106744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.106816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.106845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.120298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.120382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.134480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.134533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.134546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.147747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.147792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.147805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.160971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.161008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.161020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.174198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.174254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.187592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.187635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.187648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.843 [2024-07-15 21:30:32.200918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:58.843 [2024-07-15 21:30:32.200955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.843 [2024-07-15 21:30:32.200968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.214105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.214138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.214151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.227350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.227391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.227404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.240720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.240763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.240777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.254257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.254301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.254314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.267680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.267719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.267731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.281466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.281510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.281523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.294695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.294738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.103 [2024-07-15 21:30:32.294750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.103 [2024-07-15 21:30:32.308123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.103 [2024-07-15 21:30:32.308172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.308185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.321700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.321761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.321774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.335107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.335153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.335167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.349748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.349809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.349878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.364208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.364267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.364287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.378855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.378906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.378927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.393529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.393585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.393606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.408352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.408410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.408428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.422782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.422876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.422898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.437444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.437518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.437538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.452127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.452203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.452222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.104 [2024-07-15 21:30:32.466663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.104 [2024-07-15 21:30:32.466743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21093 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:59.104 [2024-07-15 21:30:32.466763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.481303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.481376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.481396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.495981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.496062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.496083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.521072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.521166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.521194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.536842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.536917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.536933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.550405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.550475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.550488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.563831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.563901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.577204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.577273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:8040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.577287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.590583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.590654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.590668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.603885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.603968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.617163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.617230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.617244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.630408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.630452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.630465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.643607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.643640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.643652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.656764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.656794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.656805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.669939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.669971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.669983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.683217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.683285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.683299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.696431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.696467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.364 [2024-07-15 21:30:32.696478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.364 [2024-07-15 21:30:32.709658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.364 [2024-07-15 21:30:32.709694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.365 [2024-07-15 21:30:32.709707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.365 [2024-07-15 21:30:32.722884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.365 [2024-07-15 21:30:32.722920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.365 [2024-07-15 21:30:32.722933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.736076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.736112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.736125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.749317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.749363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.749375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.762539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 
00:16:59.625 [2024-07-15 21:30:32.762583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.762596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.775760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.775803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.775816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.788972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.789012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.802177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.802215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.802229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.815324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.815358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.815371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.828492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.828524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.828537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.841699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.841731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.841743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.854850] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.854877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.854889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.867973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.868002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.868013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.881123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.881153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.894263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.894296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.894307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.907422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.907452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.907463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.920540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.920568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.920579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.933661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.933689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.933699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.946832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.946861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.946873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.966027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.966059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.966071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.979283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.979313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.625 [2024-07-15 21:30:32.979325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.625 [2024-07-15 21:30:32.992546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.625 [2024-07-15 21:30:32.992578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.626 [2024-07-15 21:30:32.992589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.005717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.005746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.005761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.018923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.018962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.032095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.032123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.032133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.045216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.045243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.045254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.058327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.058354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.058365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.071436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.071463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.071474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.084541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.084569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.084580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.097649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.097677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.097688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.110765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.110795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.110806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.123891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.123923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 
21:30:33.123935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.137259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.137301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.137313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.150598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.150643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.150657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.163771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.163810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.163832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.176953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.176986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.176999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.190573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.190604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.190615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.203712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.203746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.203757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.216878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.216909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3439 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.216921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.230001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.230028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.230039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.900 [2024-07-15 21:30:33.243106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:16:59.900 [2024-07-15 21:30:33.243133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.900 [2024-07-15 21:30:33.243144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.256233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.256264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.256275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.269391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.269426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.269439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.282583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.282639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.282652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.295887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.295942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.295957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.309156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.309210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:19752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.309222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.322453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.322508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.322520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.335633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.335681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.335694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.348815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.348871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.348884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.362019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.362069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.362082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.375232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.375283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.375295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.388408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.388452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.388464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.401607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.401648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.401660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.414814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.414860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.428104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.428136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.428149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.441473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.441514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.441528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.454705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.454751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.454763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.467896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.467942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.467954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.481165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.481213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.481226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.494346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 
00:17:00.177 [2024-07-15 21:30:33.494393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.494406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.507563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.507613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.507626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.520786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.520852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.520865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.177 [2024-07-15 21:30:33.534160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.177 [2024-07-15 21:30:33.534203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.177 [2024-07-15 21:30:33.534215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.547356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.547402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.547415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.560583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.560640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.560655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.573853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.573899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.587091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.587134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.587148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.600265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.600296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.600308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.613416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.613446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.626592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.626623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.626634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.639737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.639766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.639777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.652874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.652901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.652912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.665999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.666026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.666037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.679116] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.679142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.679153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.692216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.692243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.692253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.705397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.705424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.705435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.718541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.718568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.436 [2024-07-15 21:30:33.731685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.436 [2024-07-15 21:30:33.731713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.436 [2024-07-15 21:30:33.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.437 [2024-07-15 21:30:33.744970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.437 [2024-07-15 21:30:33.744997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.437 [2024-07-15 21:30:33.745007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.437 [2024-07-15 21:30:33.758109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.437 [2024-07-15 21:30:33.758136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.437 [2024-07-15 21:30:33.758147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:00.437 [2024-07-15 21:30:33.771223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.437 [2024-07-15 21:30:33.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.437 [2024-07-15 21:30:33.771260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.437 [2024-07-15 21:30:33.784333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.437 [2024-07-15 21:30:33.784360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.437 [2024-07-15 21:30:33.784371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.437 [2024-07-15 21:30:33.797621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.437 [2024-07-15 21:30:33.797653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.437 [2024-07-15 21:30:33.797665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.816766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.816806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.816828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.830325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.830365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.830377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.843700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.843736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.843748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.856895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.856930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.870048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.870084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.870096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.883217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.883251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.883263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.896360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.896401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.896416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.909546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.909581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.909593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.922685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.922728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.922740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.935885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.935930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.935943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.949158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.949203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.949214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.962419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.962465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.696 [2024-07-15 21:30:33.962479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.696 [2024-07-15 21:30:33.975591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.696 [2024-07-15 21:30:33.975639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:33.975651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.697 [2024-07-15 21:30:33.988889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.697 [2024-07-15 21:30:33.988936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:33.988949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.697 [2024-07-15 21:30:34.002423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.697 [2024-07-15 21:30:34.002474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:34.002488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.697 [2024-07-15 21:30:34.015900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.697 [2024-07-15 21:30:34.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:34.015960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.697 [2024-07-15 21:30:34.029172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.697 [2024-07-15 21:30:34.029217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:34.029230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.697 [2024-07-15 21:30:34.042409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020) 00:17:00.697 [2024-07-15 21:30:34.042460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.697 [2024-07-15 21:30:34.042472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:00.697 [2024-07-15 21:30:34.055606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020)
00:17:00.697 [2024-07-15 21:30:34.055655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:00.697 [2024-07-15 21:30:34.055669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:00.956 [2024-07-15 21:30:34.068781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1282020)
00:17:00.956 [2024-07-15 21:30:34.068831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:00.956 [2024-07-15 21:30:34.068844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:00.956
00:17:00.956 Latency(us)
00:17:00.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.956 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:00.956 nvme0n1 : 2.01 18776.94 73.35 0.00 0.00 6813.08 6211.44 31583.61
00:17:00.956 ===================================================================================================================
00:17:00.956 Total : 18776.94 73.35 0.00 0.00 6813.08 6211.44 31583.61
00:17:00.956 0
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:00.956 | .driver_specific
00:17:00.956 | .nvme_error
00:17:00.956 | .status_code
00:17:00.956 | .command_transient_transport_error'
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79795
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79795 ']'
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79795
00:17:00.956 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79795
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:01.215 killing process with pid 79795
21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79795'
00:17:01.215 Received shutdown signal, test time was about 2.000000 seconds
00:17:01.215
00:17:01.215 Latency(us)
00:17:01.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:01.215 ===================================================================================================================
00:17:01.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79795
00:17:01.215 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79795
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79850
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79850 /var/tmp/bperf.sock
00:17:01.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79850 ']'
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:01.475 21:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:01.475 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:01.475 Zero copy mechanism will not be used.
00:17:01.475 [2024-07-15 21:30:34.640286] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization...
00:17:01.475 [2024-07-15 21:30:34.640356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79850 ]
00:17:01.475 [2024-07-15 21:30:34.774284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.734 [2024-07-15 21:30:34.874588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:01.734 [2024-07-15 21:30:34.916771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:17:02.302 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:02.302 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:17:02.302 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:02.302 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:02.561 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:02.561 nvme0n1
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:02.820 21:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:02.820 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:02.820 Zero copy mechanism will not be used.
00:17:02.820 Running I/O for 2 seconds...
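
The trace above is the setup for the second error run: bdevperf is started against its own RPC socket, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached over TCP with data digest turned on, crc32c corruption is injected into the accel layer, and after the 2-second run the per-bdev error counters are checked. Below is a minimal shell sketch of that sequence, condensed from the commands visible in the log; SPDK_DIR and the backgrounding of bdevperf are illustrative assumptions, and which RPC server the accel_error_inject_error call should address depends on the harness (rpc_cmd in the trace), so it is shown without an explicit -s socket.

# Condensed digest-error flow as seen in the trace (paths taken from the log; SPDK_DIR is an assumed shorthand).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket: randread, 128 KiB I/O, queue depth 16, 2 s run, wait for perform_tests (-z).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Count NVMe errors instead of failing the bdev: keep per-status error stats, retry I/O indefinitely.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled; this creates bdev nvme0n1.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c operations in the accel layer so the computed data digests mismatch.
# (Assumption: pointed at whichever app rpc_cmd targets in the harness -- add -s <socket> as needed.)
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the I/O, then read back how many completions ended as transient transport errors.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))   # the check the trace performs as (( 147 > 0 )) after the first run

With --nvme-error-stat and an unlimited retry count, each data digest failure is retried and recorded under command_transient_transport_error rather than failing nvme0n1 outright, which is why the run still completes and reports throughput while the log prints a stream of COMMAND TRANSIENT TRANSPORT ERROR notices.
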
00:17:02.820 [2024-07-15 21:30:36.066868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.066921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.066935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.070690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.070732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.070744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.074467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.074503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.078128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.078164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.078175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.081886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.081919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.081930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.085641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.085676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.085687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.089425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.089463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.089474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.093161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.093196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.093207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.096978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.097010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.097022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.100665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.100696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.100707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.104436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.104471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.104482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.108190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.108223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.108234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.111878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.111910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.111921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.115552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.115587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.115599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.119214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.119248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.119260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.122904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.122935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.122946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.126602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.126646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.130285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.130319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.130329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.133995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.134026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.820 [2024-07-15 21:30:36.134037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.820 [2024-07-15 21:30:36.137630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.820 [2024-07-15 21:30:36.137665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.137675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.141304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.141338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 
21:30:36.141349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.145001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.145033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.145044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.148680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.148713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.148724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.152333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.152367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.152378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.156007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.156040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.156051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.159640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.159674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.159685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.163261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.163296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.163307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.166902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.166935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.166946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.170575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.170608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.170619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.174298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.174331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.174342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.178002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.178034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.178045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.181699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.181732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.181743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.821 [2024-07-15 21:30:36.185577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:02.821 [2024-07-15 21:30:36.185610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.821 [2024-07-15 21:30:36.185621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.189661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.189699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.189723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.193563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.193598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.193610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.197355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.197390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.197402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.201152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.201188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.201200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.204839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.204871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.208522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.208555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.208566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.212201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.212235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.212246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.215888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.215922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.215932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.219583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.219617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.219627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.223246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.223279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.223289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.226939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.226971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.226981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.230565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.230599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.234247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.234282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.234293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.237943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.237975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.237987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.241621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.241656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.241666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.245315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:03.081 [2024-07-15 21:30:36.245349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.245359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.248981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.249012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.249023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.252685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.252717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.252728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.256335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.256367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.256378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.081 [2024-07-15 21:30:36.260053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.081 [2024-07-15 21:30:36.260087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.081 [2024-07-15 21:30:36.260098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.263746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.263780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.263791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.267460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.267493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.267504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.271159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.271191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.271202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.274829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.274859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.274870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.278487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.278521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.278532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.282131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.282164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.282175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.285803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.285846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.285857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.289507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.289540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.289551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.293207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.293241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.293252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.296899] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.296931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.296941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.300551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.300584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.300595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.304216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.304248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.304259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.307992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.308024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.308034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.311697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.311729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.311740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.315357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.315389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.315399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.319014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.319046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.319057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:03.082 [2024-07-15 21:30:36.322669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.322702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.322713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.326430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.326463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.326474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.330122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.330156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.330167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.333782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.333829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.333842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.337439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.337472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.082 [2024-07-15 21:30:36.337482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.082 [2024-07-15 21:30:36.341081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.082 [2024-07-15 21:30:36.341121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.341132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.344783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.344815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.344838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.348467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.348500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.348511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.352167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.352199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.355862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.355893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.355904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.359500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.359531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.359542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.363141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.363175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.363186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.367046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.367081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.367092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.370691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.370724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.370735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.374383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.374417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.374428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.378069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.378103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.378114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.381739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.381772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.381783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.385458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.385491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.385501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.389157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.389191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.389201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.393051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.393085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.393097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.396869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.396903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.083 [2024-07-15 21:30:36.396914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.400617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.400653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.400665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.404327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.404359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.404370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.408158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.408191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.408202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.412020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.412052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.412062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.415798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.415841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.415852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.419652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.419685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.419696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.423494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.083 [2024-07-15 21:30:36.423529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.083 [2024-07-15 21:30:36.423540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.083 [2024-07-15 21:30:36.427448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.427478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.427489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.084 [2024-07-15 21:30:36.431241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.431272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.431282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.084 [2024-07-15 21:30:36.434955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.434982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.434993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.084 [2024-07-15 21:30:36.438628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.438657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.438667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.084 [2024-07-15 21:30:36.442378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.442408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.442420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.084 [2024-07-15 21:30:36.446203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.084 [2024-07-15 21:30:36.446233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.084 [2024-07-15 21:30:36.446243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.450096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.450128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.450138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.453846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.453874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.453885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.457758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.457787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.457797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.461708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.461737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.461748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.465632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.465661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.465671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.469385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.469416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.469426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.473109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.473141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.473152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.476865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:03.344 [2024-07-15 21:30:36.476909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.476921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.481077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.481114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.481126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.484868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.484896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.488744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.488774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.488785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.492533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.492568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.492580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.496291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.496332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.500061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.500091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.500102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.503893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.503923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.503935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.507684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.344 [2024-07-15 21:30:36.507715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.344 [2024-07-15 21:30:36.507726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.344 [2024-07-15 21:30:36.511510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.511542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.511553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.515350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.515382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.515394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.519123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.519154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.519165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.522960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.522988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.522999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.526795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.526843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.526857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.530641] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.530672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.530682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.534619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.534652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.534663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.538345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.538376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.538387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.541996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.542024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.542035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.545758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.545788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.545798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.549460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.549489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.549499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.553134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.553163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.553174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:03.345 [2024-07-15 21:30:36.556794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.556834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.556845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.560482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.560511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.560521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.564115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.564145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.564156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.567775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.567805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.567829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.571465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.571494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.571505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.575182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.575213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.575223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.578874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.578902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.582542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.582572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.582582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.586182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.586212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.586223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.589883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.589911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.589921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.593552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.593580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.593591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.597194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.597224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.597234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.345 [2024-07-15 21:30:36.600864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.345 [2024-07-15 21:30:36.600892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.345 [2024-07-15 21:30:36.600903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.604449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.604478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.604488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.608122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.608152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.608163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.611784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.611814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.611837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.615445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.615475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.615485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.619089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.619119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.619130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.622730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.622760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.622771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.626338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.626367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.626378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.630012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.630042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.346 [2024-07-15 21:30:36.630053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.633677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.633706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.633716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.637474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.637504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.637515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.641221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.641251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.641262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.644945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.644973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.644983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.648709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.648739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.648750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.652462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.652494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.652505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.656189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.656220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.656231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.659927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.659955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.659965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.663685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.663714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.663724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.667539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.667570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.667580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.671357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.671388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.671400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.675273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.675304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.675315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.679169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.679199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.679210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.682940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.682967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.682978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.686849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.686879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.686890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.690700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.690729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.690740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.694618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.694649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.694661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.698519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.698551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.698563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.702321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.702352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.702363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.706117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.346 [2024-07-15 21:30:36.706147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.346 [2024-07-15 21:30:36.706157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.346 [2024-07-15 21:30:36.709893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:03.347 [2024-07-15 21:30:36.709922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.347 [2024-07-15 21:30:36.709932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.713569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.713599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.713610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.717281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.717313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.717324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.720980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.721008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.721019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.724642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.724670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.724680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.728319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.728348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.728359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.732017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.732046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.732057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.735653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.735682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.735693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.739331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.739362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.739372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.743011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.743041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.606 [2024-07-15 21:30:36.746802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.606 [2024-07-15 21:30:36.746840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.606 [2024-07-15 21:30:36.746851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.750653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.750682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.750692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.754341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.754371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.754381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.758046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.758077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.758088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.761714] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.761743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.761753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.765364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.765393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.765403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.769017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.769046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.769057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.772667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.772694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.772705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.776319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.776350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.776360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.779989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.780018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.780029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.783649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.783678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.783688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.787323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.787353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.787364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.790995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.791023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.791033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.794647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.794675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.794686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.798272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.798302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.798313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.801923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.801951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.801961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.805585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.805613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.805624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.809262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.809291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.809302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.812946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.812973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.812985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.816587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.816623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.816634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.820217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.820247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.820258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.823909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.823937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.823948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.827586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.827614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.831269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.831300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.831311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.834917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.834946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.834957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.838563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.838593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.838603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.842201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.607 [2024-07-15 21:30:36.842231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.607 [2024-07-15 21:30:36.842241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.607 [2024-07-15 21:30:36.845886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.845914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.845925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.849542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.849571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.849581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.853181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.853211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.853222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.856791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.856842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.860485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.860515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.608 [2024-07-15 21:30:36.860525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.864171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.864200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.864210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.867851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.867889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.871515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.871543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.871554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.875160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.875190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.875201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.878804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.878844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.878854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.882470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.882499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.882509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.886130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.886159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.886170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.889788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.889829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.889840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.893442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.893470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.893481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.897094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.897123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.897133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.900740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.900769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.900779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.904396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.904425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.904436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.908024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.908053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.908064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.911714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.911744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.911754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.915392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.915423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.915434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.919049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.919080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.919091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.922732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.922761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.922772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.926439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.926471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.926482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.930331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.930363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.930375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.934114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.608 [2024-07-15 21:30:36.934145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.608 [2024-07-15 21:30:36.934157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.608 [2024-07-15 21:30:36.937964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:03.609 [2024-07-15 21:30:36.937991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.938002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.941731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.941761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.941784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.945601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.945631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.945642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.949514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.949545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.949556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.953294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.953325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.953336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.957036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.957066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.957077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.960931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.960959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.960969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.964637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.964663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.964674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.968298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.968328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.968339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.609 [2024-07-15 21:30:36.972118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.609 [2024-07-15 21:30:36.972147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.609 [2024-07-15 21:30:36.972158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.976242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.976296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.976315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.980580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.980633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.980649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.984402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.984439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.984450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.988254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.988290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.992019] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.992049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.992061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.995788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.995839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.995853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:36.999512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:36.999544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:36.999556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.003252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.003283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.003294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.007151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.007184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.007195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.010836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.010866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.010877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.014500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.014531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.014542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:03.870 [2024-07-15 21:30:37.018343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.018374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.018385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.022052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.022085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.022096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.025706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.025739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.025750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.029365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.029396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.029406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.033236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.033267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.870 [2024-07-15 21:30:37.033279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.870 [2024-07-15 21:30:37.037110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.870 [2024-07-15 21:30:37.037140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.037151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.040898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.040927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.040937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.044620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.044647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.044657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.048278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.048308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.048318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.052035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.052065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.052076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.055678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.055708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.055719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.059389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.059419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.059430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.063096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.063127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.063138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.066790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.066832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.066850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.070565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.070596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.070607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.074319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.074349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.074360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.077976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.078005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.078015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.081578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.081609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.081620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.085255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.085286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.085297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.088936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.088965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.088976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.092594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.092633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.871 [2024-07-15 21:30:37.092644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.096239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.096272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.096282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.099913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.099941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.099952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.103833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.103869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.103886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.107588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.107621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.107632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.111312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.111345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.115067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.115098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.115109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.118751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.118782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.118794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.122453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.122483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.122494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.126094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.126124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.129859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.129888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.129898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.133625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.133654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.133665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.137359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.137390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.141059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.141090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.141101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.871 [2024-07-15 21:30:37.144779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.871 [2024-07-15 21:30:37.144808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.871 [2024-07-15 21:30:37.144828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.148450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.148479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.148490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.152140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.152180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.155832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.155859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.155869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.159458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.159486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.159497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.163097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.163131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.163143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.166742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.166771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.166782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.170403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:03.872 [2024-07-15 21:30:37.170433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.170444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.174068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.174097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.174107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.177716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.177746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.177756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.181386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.181415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.181426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.185046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.185076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.185086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.188658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.188685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.188695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.192327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.192357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.192368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.196012] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.196039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.196049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.199636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.199665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.199675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.203312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.203343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.203353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.206985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.207013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.207023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.210619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.210649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.210659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.214296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.214326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.214336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.218011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.218040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.218050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:03.872 [2024-07-15 21:30:37.221654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.221695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.225309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.225339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.225349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.228893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.228922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.228933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.872 [2024-07-15 21:30:37.232595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:03.872 [2024-07-15 21:30:37.232633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.872 [2024-07-15 21:30:37.232643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.236395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.236424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.236435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.240112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.240141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.240151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.243776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.243806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.243827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.247408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.247438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.247449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.251050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.251080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.251090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.254791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.254831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.254842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.258451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.258481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.258491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.262230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.262261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.133 [2024-07-15 21:30:37.262272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.133 [2024-07-15 21:30:37.265974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.133 [2024-07-15 21:30:37.266002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.266013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.269789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.269830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.269842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.273543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.273572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.273583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.277890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.277919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.277930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.281612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.281642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.281653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.285304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.285334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.285344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.289025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.289055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.289066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.292745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.292774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.292784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.296450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.296480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.134 [2024-07-15 21:30:37.296491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.300332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.300374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.300384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.304146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.304175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.304186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.307864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.307892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.307903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.311528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.311557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.311567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.315193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.315222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.315233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.318916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.318944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.318955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.322617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.322646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.322656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.326322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.326352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.326362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.330052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.330082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.330092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.333689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.333718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.333728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.337321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.337351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.337362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.340971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.340999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.341009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.344659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.344685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.344696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.348334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.348364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.348375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.352015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.352045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.355647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.355678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.355688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.359438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.359470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.359481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.363364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.363395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.363406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.367285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.367316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.367326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.371132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.134 [2024-07-15 21:30:37.371162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.134 [2024-07-15 21:30:37.371173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.134 [2024-07-15 21:30:37.374941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:04.134 [2024-07-15 21:30:37.374970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.374981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.378631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.378660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.378672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.382283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.382327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.385963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.385991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.386002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.389579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.389608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.389619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.393207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.393237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.393248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.396873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.396900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.396910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.400494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.400524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.400534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.404203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.404234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.404244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.407886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.407916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.407926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.411543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.411573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.411584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.415205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.415235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.415245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.418911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.418940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.418950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.422575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.422605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.422615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.426292] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.426323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.426334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.429945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.429975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.429986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.433584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.433614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.433625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.437272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.437302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.437313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.440928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.440957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.440968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.444542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.444570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.444581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.448216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.448246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.448257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:04.135 [2024-07-15 21:30:37.451887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.451914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.451925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.455538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.455568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.455579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.459234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.459265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.459276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.462881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.462909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.462920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.466495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.466535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.470199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.470230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.470240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.473875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.473904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.473915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.477595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.477625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.135 [2024-07-15 21:30:37.477635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.135 [2024-07-15 21:30:37.481284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.135 [2024-07-15 21:30:37.481313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.136 [2024-07-15 21:30:37.481324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.136 [2024-07-15 21:30:37.485003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.136 [2024-07-15 21:30:37.485033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.136 [2024-07-15 21:30:37.485043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.136 [2024-07-15 21:30:37.488716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.136 [2024-07-15 21:30:37.488744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.136 [2024-07-15 21:30:37.488754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.136 [2024-07-15 21:30:37.492392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.136 [2024-07-15 21:30:37.492422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.136 [2024-07-15 21:30:37.492432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.136 [2024-07-15 21:30:37.496099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.136 [2024-07-15 21:30:37.496129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.136 [2024-07-15 21:30:37.496140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.395 [2024-07-15 21:30:37.500086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.395 [2024-07-15 21:30:37.500129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.395 [2024-07-15 21:30:37.500162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.395 [2024-07-15 21:30:37.504063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.395 [2024-07-15 21:30:37.504098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.395 [2024-07-15 21:30:37.504110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.395 [2024-07-15 21:30:37.507945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.507975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.507987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.511805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.511845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.511856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.515719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.515749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.515760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.519496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.519527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.519537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.523299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.523329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.523339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.527036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.527067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
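The repeated triplets above (an nvme_tcp.c "data digest error", the nvme_qpair.c READ command notice, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) show the host-side NVMe/TCP driver detecting data digest (CRC32C) mismatches on READ completions and failing each command with the transient transport error status; only the LBA, sqhd and microsecond timestamps differ from record to record. Below is a minimal standalone sketch for condensing such a flood into counts, assuming only the record shapes visible in this log; the script and every name in it are hypothetical illustrations, not part of SPDK or of this CI job.

#!/usr/bin/env python3
# Hypothetical helper: summarize SPDK NVMe/TCP data-digest error floods
# from a console log fed on stdin. Pattern shapes are taken from the
# records above; nothing here is an SPDK or Jenkins API.
import re
import sys
from collections import Counter

# "data digest error on tqpair=(0x9e4ac0)" -> capture the qpair pointer
DIGEST_ERR = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
# "READ sqid:1 cid:15 nsid:1 lba:18816 len:32" -> capture the LBA
READ_NOTICE = re.compile(r"READ sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
# completion printed with status (00/22), i.e. transient transport error
COMPLETION = re.compile(r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\)")

def summarize(lines):
    digest_errors = Counter()   # digest errors per qpair pointer
    lbas = Counter()            # how often each LBA appears in READ notices
    completions = 0             # completions carrying status (00/22)
    for line in lines:
        # findall handles both one record per line and the wrapped form
        # seen above, where several records share one physical line.
        for qpair in DIGEST_ERR.findall(line):
            digest_errors[qpair] += 1
        for lba in READ_NOTICE.findall(line):
            lbas[int(lba)] += 1
        completions += len(COMPLETION.findall(line))
    return digest_errors, lbas, completions

if __name__ == "__main__":
    errs, lbas, completions = summarize(sys.stdin)
    for qpair, count in sorted(errs.items()):
        print(f"qpair {qpair}: {count} data digest errors")
    print(f"{completions} completions with TRANSIENT TRANSPORT ERROR (00/22)")
    print(f"{len(lbas)} distinct LBAs affected")

Fed the raw console text on stdin, it prints one digest-error count per qpair pointer plus the total number of (00/22) completions and the number of distinct LBAs involved, which is usually all that matters when skimming a run like this one.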
00:17:04.396 [2024-07-15 21:30:37.527077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.530796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.530837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.530847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.534770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.534802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.534813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.538537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.538567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.538578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.542236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.542268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.542279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.545906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.545934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.545945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.549531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.549561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.549572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.553210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.553240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.553251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.556921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.556950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.556960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.560800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.560840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.560853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.564666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.564695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.564706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.568477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.568507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.568518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.572229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.572260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.572270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.575913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.575941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.575952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.579605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.579635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.579646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.583315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.583345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.583356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.587007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.587036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.587047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.590655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.590685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.594316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.594347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.594357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.598036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.598065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.598076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.601670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.601698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.601709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.605323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:04.396 [2024-07-15 21:30:37.605353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.605364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.609057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.609086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.609097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.396 [2024-07-15 21:30:37.612698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.396 [2024-07-15 21:30:37.612726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.396 [2024-07-15 21:30:37.612736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.616342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.616371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.616382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.620020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.620049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.620059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.623640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.623669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.623680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.627323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.627352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.627363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.630975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.631003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.631013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.634612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.634641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.634652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.638297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.638328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.638339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.641917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.641945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.641955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.645535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.645564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.645575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.649162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.649192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.649202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.652855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.652882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.652892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.656508] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.656537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.656547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.660182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.660212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.660223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.663862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.663890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.663901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.667502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.667531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.667541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.671142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.671172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.671184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.674793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.674835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.678462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.678492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.678502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.682106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.682136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.682147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.685772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.685802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.685813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.689450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.689480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.689490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.693122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.693151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.693162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.696769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.696798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.696808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.700442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.700471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.700481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.704079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.704108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.704119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.707715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.707744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.707755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.711474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.397 [2024-07-15 21:30:37.711503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.397 [2024-07-15 21:30:37.711514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.397 [2024-07-15 21:30:37.715141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.715170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.715181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.718916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.718944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.718954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.722630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.722658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.722668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.726457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.726489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.726500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.730359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.730389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.730399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.734154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.734195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.738026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.738055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.741697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.741726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.741737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.745386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.745415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.745426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.749114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.749143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.749154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.752775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.752804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.752815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.756475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.756504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.398 [2024-07-15 21:30:37.756515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.398 [2024-07-15 21:30:37.760171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.398 [2024-07-15 21:30:37.760201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.398 [2024-07-15 21:30:37.760211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.658 [2024-07-15 21:30:37.763909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.658 [2024-07-15 21:30:37.763937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.658 [2024-07-15 21:30:37.763948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.658 [2024-07-15 21:30:37.767597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.658 [2024-07-15 21:30:37.767626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.767636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.771402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.771432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.771443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.775174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.775204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.775215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.778869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.778897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.778907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.782612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.782644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.782654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.786392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.786423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.786434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.790138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.790168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.790179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.793906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.793934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.793945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.797563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.797593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.797604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.801199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.801229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.801240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.804875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.804903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.804913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.808529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.808558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.808569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.812182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.812213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.812224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.815848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.815876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.815886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.819583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.819612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.819622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.823275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.823315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.826953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.826981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.826992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.830597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.830627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.830637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.834311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 
00:17:04.659 [2024-07-15 21:30:37.834340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.834351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.837957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.837985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.837996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.841588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.841618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.841628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.845268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.845298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.845309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.848962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.848990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.849000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.852637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.852664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.852675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.856344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.856374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.856384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.860016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.860045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.860056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.863700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.863730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.863740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.867409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.867439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.867450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.871231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.871262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.659 [2024-07-15 21:30:37.871273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.659 [2024-07-15 21:30:37.874947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.659 [2024-07-15 21:30:37.874974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.874985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.878665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.878693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.878704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.882332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.882362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.882372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.886020] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.886048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.886059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.889702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.889731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.889741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.893353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.893381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.893392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.896983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.897010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.897021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.900594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.900631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.900642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.904281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.904310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.904321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.908006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.908035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.908045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:04.660 [2024-07-15 21:30:37.911657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.911696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.915315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.915344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.915355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.918981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.919010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.919020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.922672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.922700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.922711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.926350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.926380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.926391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.930020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.930049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.930060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.933657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.933686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.933696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.937329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.937358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.937369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.941010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.941038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.941049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.944680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.944708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.944718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.948437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.948467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.948478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.952226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.952257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.952267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.955898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.955927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.955938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.959621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.959652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.959662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.963260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.963293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.963304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.966895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.966923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.966933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.970543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.970574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.970584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.974349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.974381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.974391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.978148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.660 [2024-07-15 21:30:37.978181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.660 [2024-07-15 21:30:37.978192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.660 [2024-07-15 21:30:37.981933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:37.981963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:37.981974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:37.985840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:37.985887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.661 [2024-07-15 21:30:37.985898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:37.989861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:37.989891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:37.989903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:37.993697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:37.993729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:37.993740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:37.997557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:37.997593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:37.997606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.001381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.001413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.001424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.005063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.005093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.005104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.008697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.008726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.008737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.012306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.012336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.012347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.015887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.015916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.015927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.019475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.019504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.019514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.661 [2024-07-15 21:30:38.023109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.661 [2024-07-15 21:30:38.023139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.661 [2024-07-15 21:30:38.023150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.920 [2024-07-15 21:30:38.026759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.920 [2024-07-15 21:30:38.026789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.920 [2024-07-15 21:30:38.026800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.920 [2024-07-15 21:30:38.030360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.920 [2024-07-15 21:30:38.030389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.920 [2024-07-15 21:30:38.030400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.920 [2024-07-15 21:30:38.033973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.920 [2024-07-15 21:30:38.034001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.920 [2024-07-15 21:30:38.034011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.920 [2024-07-15 21:30:38.037540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.920 [2024-07-15 21:30:38.037569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.920 [2024-07-15 21:30:38.037580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.920 [2024-07-15 21:30:38.041146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.920 [2024-07-15 21:30:38.041175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.921 [2024-07-15 21:30:38.041185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.921 [2024-07-15 21:30:38.044747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.921 [2024-07-15 21:30:38.044775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.921 [2024-07-15 21:30:38.044786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.921 [2024-07-15 21:30:38.048348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.921 [2024-07-15 21:30:38.048378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.921 [2024-07-15 21:30:38.048389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.921 [2024-07-15 21:30:38.051952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9e4ac0) 00:17:04.921 [2024-07-15 21:30:38.051980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.921 [2024-07-15 21:30:38.051991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.921 00:17:04.921 Latency(us) 00:17:04.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.921 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:04.921 nvme0n1 : 2.00 8295.09 1036.89 0.00 0.00 1926.21 1723.94 9422.44 00:17:04.921 =================================================================================================================== 00:17:04.921 Total : 8295.09 1036.89 0.00 0.00 1926.21 1723.94 9422.44 00:17:04.921 0 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:04.921 | .driver_specific 00:17:04.921 | .nvme_error 00:17:04.921 | .status_code 00:17:04.921 | .command_transient_transport_error' 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:04.921 21:30:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 535 > 0 )) 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79850 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79850 ']' 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79850 00:17:04.921 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79850 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:05.179 killing process with pid 79850 00:17:05.179 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.179 00:17:05.179 Latency(us) 00:17:05.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.179 =================================================================================================================== 00:17:05.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79850' 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79850 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79850 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79904 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79904 /var/tmp/bperf.sock 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79904 ']' 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
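(For readers following the trace: the (( 535 > 0 )) check above is host/digest.sh asserting that the previous randread run accumulated transient transport errors, and it obtains that count by querying bdevperf's RPC socket and filtering the iostat JSON. The lines below are an illustrative, hand-runnable restatement of that query and are not part of the captured output; the rpc.py path, socket path, bdev name, and jq filter are taken verbatim from the trace above, while the errcount variable name is simply chosen here for the sketch.)

  # query bdevperf's per-bdev I/O statistics over its RPC socket
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the check passes as long as at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded
  # (this particular run counted 535 of them; counting requires bdev_nvme_set_options --nvme-error-stat)
  (( errcount > 0 ))
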
00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:05.179 21:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:05.179 [2024-07-15 21:30:38.546005] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:17:05.179 [2024-07-15 21:30:38.546084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79904 ] 00:17:05.437 [2024-07-15 21:30:38.692442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.437 [2024-07-15 21:30:38.780114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.693 [2024-07-15 21:30:38.821799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.260 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:06.519 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.519 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.519 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.519 nvme0n1 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:06.778 21:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:06.778 Running I/O 
for 2 seconds... 00:17:06.778 [2024-07-15 21:30:40.022339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fef90 00:17:06.778 [2024-07-15 21:30:40.024417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.024452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.035479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190feb58 00:17:06.778 [2024-07-15 21:30:40.037486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.037516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.047805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fe2e8 00:17:06.778 [2024-07-15 21:30:40.049725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.049755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.060432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fda78 00:17:06.778 [2024-07-15 21:30:40.062342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.062371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.072729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fd208 00:17:06.778 [2024-07-15 21:30:40.074624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.074653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.084998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fc998 00:17:06.778 [2024-07-15 21:30:40.086898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.086926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.097222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fc128 00:17:06.778 [2024-07-15 21:30:40.099144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.099172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.109816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb8b8 00:17:06.778 [2024-07-15 21:30:40.111648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.111677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.122090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb048 00:17:06.778 [2024-07-15 21:30:40.123903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.123930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:06.778 [2024-07-15 21:30:40.134231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fa7d8 00:17:06.778 [2024-07-15 21:30:40.136030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:06.778 [2024-07-15 21:30:40.136057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.146384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f9f68 00:17:07.038 [2024-07-15 21:30:40.148166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.148192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.158538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f96f8 00:17:07.038 [2024-07-15 21:30:40.160307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.160333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.170677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f8e88 00:17:07.038 [2024-07-15 21:30:40.172431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.172457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.182855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f8618 00:17:07.038 [2024-07-15 21:30:40.184581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.184615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.194997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7da8 00:17:07.038 [2024-07-15 21:30:40.196716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.196742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.207181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7538 00:17:07.038 [2024-07-15 21:30:40.208893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.208919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.219332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6cc8 00:17:07.038 [2024-07-15 21:30:40.221032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.221057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.231487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6458 00:17:07.038 [2024-07-15 21:30:40.233170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.233196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.243622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f5be8 00:17:07.038 [2024-07-15 21:30:40.245293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.245321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.255744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f5378 00:17:07.038 [2024-07-15 21:30:40.257396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.257423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.267884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4b08 00:17:07.038 [2024-07-15 21:30:40.269515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.269542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.280025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4298 00:17:07.038 [2024-07-15 21:30:40.281638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.281666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.292185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f3a28 00:17:07.038 [2024-07-15 21:30:40.293793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.293827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.304328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f31b8 00:17:07.038 [2024-07-15 21:30:40.305920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.305947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.316463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f2948 00:17:07.038 [2024-07-15 21:30:40.318042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.318069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.328603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f20d8 00:17:07.038 [2024-07-15 21:30:40.330172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.330201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.340744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f1868 00:17:07.038 [2024-07-15 21:30:40.342280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.342307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.352865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0ff8 00:17:07.038 [2024-07-15 21:30:40.354383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.354410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.364987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0788 00:17:07.038 [2024-07-15 21:30:40.366488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.038 [2024-07-15 21:30:40.366516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:07.038 [2024-07-15 21:30:40.377133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eff18 00:17:07.039 [2024-07-15 21:30:40.378619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.039 [2024-07-15 21:30:40.378647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:07.039 [2024-07-15 21:30:40.389742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ef6a8 00:17:07.039 [2024-07-15 21:30:40.391229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.039 [2024-07-15 21:30:40.391268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:07.039 [2024-07-15 21:30:40.402381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eee38 00:17:07.039 [2024-07-15 21:30:40.403854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.039 [2024-07-15 21:30:40.403885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.414567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ee5c8 00:17:07.298 [2024-07-15 21:30:40.416020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.416049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.426715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190edd58 00:17:07.298 [2024-07-15 21:30:40.428152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.428179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.438851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ed4e8 00:17:07.298 [2024-07-15 21:30:40.440262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.440290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.451007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ecc78 00:17:07.298 [2024-07-15 21:30:40.452404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.452432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.463163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ec408 00:17:07.298 [2024-07-15 21:30:40.464543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.464571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.475315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ebb98 00:17:07.298 [2024-07-15 21:30:40.476697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.476724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.487507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eb328 00:17:07.298 [2024-07-15 21:30:40.488889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.488920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.499674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eaab8 00:17:07.298 [2024-07-15 21:30:40.501042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.501072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.511892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ea248 00:17:07.298 [2024-07-15 21:30:40.513229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.513262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.524044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e99d8 00:17:07.298 [2024-07-15 21:30:40.525365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.525397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.536520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e9168 00:17:07.298 [2024-07-15 21:30:40.537862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.537894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.548710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e88f8 00:17:07.298 [2024-07-15 21:30:40.550002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.550033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.560877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e8088 00:17:07.298 [2024-07-15 21:30:40.562145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.562176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.573046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e7818 00:17:07.298 [2024-07-15 21:30:40.574302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.574333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.585217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e6fa8 00:17:07.298 [2024-07-15 21:30:40.586463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.586493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.597384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e6738 00:17:07.298 [2024-07-15 21:30:40.598612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.598643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.609569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e5ec8 00:17:07.298 [2024-07-15 21:30:40.610832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.610865] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.622640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e5658 00:17:07.298 [2024-07-15 21:30:40.623876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.623909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.635034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e4de8 00:17:07.298 [2024-07-15 21:30:40.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.636250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.647191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e4578 00:17:07.298 [2024-07-15 21:30:40.648352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.648383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:07.298 [2024-07-15 21:30:40.659342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e3d08 00:17:07.298 [2024-07-15 21:30:40.660494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.298 [2024-07-15 21:30:40.660526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:07.558 [2024-07-15 21:30:40.671547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e3498 00:17:07.558 [2024-07-15 21:30:40.672692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.558 [2024-07-15 21:30:40.672724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:07.558 [2024-07-15 21:30:40.683745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e2c28 00:17:07.558 [2024-07-15 21:30:40.684890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.558 [2024-07-15 21:30:40.684921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:07.558 [2024-07-15 21:30:40.695928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e23b8 00:17:07.558 [2024-07-15 21:30:40.697042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.558 
[2024-07-15 21:30:40.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.708091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e1b48 00:17:07.559 [2024-07-15 21:30:40.709186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.709216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.720227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e12d8 00:17:07.559 [2024-07-15 21:30:40.721305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.721337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.732388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e0a68 00:17:07.559 [2024-07-15 21:30:40.733468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.733499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.744552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e01f8 00:17:07.559 [2024-07-15 21:30:40.745606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.745633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.756738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190df988 00:17:07.559 [2024-07-15 21:30:40.757770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.757801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.768887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190df118 00:17:07.559 [2024-07-15 21:30:40.769898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.769929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.781014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190de8a8 00:17:07.559 [2024-07-15 21:30:40.782009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:07.559 [2024-07-15 21:30:40.782039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.793176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190de038 00:17:07.559 [2024-07-15 21:30:40.794158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.794190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.810409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190de038 00:17:07.559 [2024-07-15 21:30:40.812337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.812368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.822576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190de8a8 00:17:07.559 [2024-07-15 21:30:40.824490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.824518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.834739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190df118 00:17:07.559 [2024-07-15 21:30:40.836649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.836679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.846926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190df988 00:17:07.559 [2024-07-15 21:30:40.848807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.859076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e01f8 00:17:07.559 [2024-07-15 21:30:40.860952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.860981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.871243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e0a68 00:17:07.559 [2024-07-15 21:30:40.873103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10481 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.873134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.883404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e12d8 00:17:07.559 [2024-07-15 21:30:40.885251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.885279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.895563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e1b48 00:17:07.559 [2024-07-15 21:30:40.897393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.897424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.907746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e23b8 00:17:07.559 [2024-07-15 21:30:40.909560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.909592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:07.559 [2024-07-15 21:30:40.919902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e2c28 00:17:07.559 [2024-07-15 21:30:40.921700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.559 [2024-07-15 21:30:40.921730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.932079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e3498 00:17:07.832 [2024-07-15 21:30:40.933867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.933896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.944229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e3d08 00:17:07.832 [2024-07-15 21:30:40.946000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.946028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.956368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e4578 00:17:07.832 [2024-07-15 21:30:40.958117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.958146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.968497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e4de8 00:17:07.832 [2024-07-15 21:30:40.970241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.970269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.980665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e5658 00:17:07.832 [2024-07-15 21:30:40.982383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.982412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:40.992846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e5ec8 00:17:07.832 [2024-07-15 21:30:40.994537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:40.994567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:41.004965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e6738 00:17:07.832 [2024-07-15 21:30:41.006684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:41.006715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:41.017216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e6fa8 00:17:07.832 [2024-07-15 21:30:41.018888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:41.018919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:41.029429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e7818 00:17:07.832 [2024-07-15 21:30:41.031100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.832 [2024-07-15 21:30:41.031132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:07.832 [2024-07-15 21:30:41.041629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e8088 00:17:07.833 [2024-07-15 21:30:41.043276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:19200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.043306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.053840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e88f8 00:17:07.833 [2024-07-15 21:30:41.055461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.055493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.066163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e9168 00:17:07.833 [2024-07-15 21:30:41.067862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.067893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.078646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190e99d8 00:17:07.833 [2024-07-15 21:30:41.080319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.080348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.090949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ea248 00:17:07.833 [2024-07-15 21:30:41.092529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.092564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.103295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eaab8 00:17:07.833 [2024-07-15 21:30:41.104897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.104930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.115534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eb328 00:17:07.833 [2024-07-15 21:30:41.117160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.117192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.127942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ebb98 00:17:07.833 [2024-07-15 21:30:41.129545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:19604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.129577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.140212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ec408 00:17:07.833 [2024-07-15 21:30:41.141733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.141765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.152427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ecc78 00:17:07.833 [2024-07-15 21:30:41.153950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.153981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.164645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ed4e8 00:17:07.833 [2024-07-15 21:30:41.166158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.166187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.177005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190edd58 00:17:07.833 [2024-07-15 21:30:41.178475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.178506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:07.833 [2024-07-15 21:30:41.189222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ee5c8 00:17:07.833 [2024-07-15 21:30:41.190682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.833 [2024-07-15 21:30:41.190714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.201468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eee38 00:17:08.093 [2024-07-15 21:30:41.202924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.202954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.213760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ef6a8 00:17:08.093 [2024-07-15 21:30:41.215205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.215235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.226033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eff18 00:17:08.093 [2024-07-15 21:30:41.227447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.238221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0788 00:17:08.093 [2024-07-15 21:30:41.239617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.239648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.250441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0ff8 00:17:08.093 [2024-07-15 21:30:41.251838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.251867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.262655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f1868 00:17:08.093 [2024-07-15 21:30:41.264074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.264109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.274933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f20d8 00:17:08.093 [2024-07-15 21:30:41.276311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.276350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.287213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f2948 00:17:08.093 [2024-07-15 21:30:41.288566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.288602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.299527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f31b8 00:17:08.093 [2024-07-15 21:30:41.300886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.300921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.311849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f3a28 00:17:08.093 [2024-07-15 21:30:41.313188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.313225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.324148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4298 00:17:08.093 [2024-07-15 21:30:41.325471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.325511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.336450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4b08 00:17:08.093 [2024-07-15 21:30:41.337754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.337791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.348714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f5378 00:17:08.093 [2024-07-15 21:30:41.349994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.350029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.360952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f5be8 00:17:08.093 [2024-07-15 21:30:41.362208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.362243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.373678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6458 00:17:08.093 [2024-07-15 21:30:41.374931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.374965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.386033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6cc8 00:17:08.093 [2024-07-15 21:30:41.387285] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.387319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.398472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7538 00:17:08.093 [2024-07-15 21:30:41.399693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.399725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.410758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7da8 00:17:08.093 [2024-07-15 21:30:41.411960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.411993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.423028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f8618 00:17:08.093 [2024-07-15 21:30:41.424206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.424239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.435254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f8e88 00:17:08.093 [2024-07-15 21:30:41.436430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.436464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.447495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f96f8 00:17:08.093 [2024-07-15 21:30:41.448652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.093 [2024-07-15 21:30:41.448692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:08.093 [2024-07-15 21:30:41.459829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f9f68 00:17:08.353 [2024-07-15 21:30:41.460964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.460996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.472142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fa7d8 00:17:08.353 [2024-07-15 
21:30:41.473264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.473295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.484384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb048 00:17:08.353 [2024-07-15 21:30:41.485510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.485543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.496728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb8b8 00:17:08.353 [2024-07-15 21:30:41.497844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.497874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.509025] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fc128 00:17:08.353 [2024-07-15 21:30:41.510125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.510156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.521273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fc998 00:17:08.353 [2024-07-15 21:30:41.522326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.522358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.533475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fd208 00:17:08.353 [2024-07-15 21:30:41.534518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.534551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.545762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fda78 00:17:08.353 [2024-07-15 21:30:41.546790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.546827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.557974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fe2e8 
00:17:08.353 [2024-07-15 21:30:41.558985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.559016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.570195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190feb58 00:17:08.353 [2024-07-15 21:30:41.571189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.571220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.587519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fef90 00:17:08.353 [2024-07-15 21:30:41.589475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.589506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.599729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190feb58 00:17:08.353 [2024-07-15 21:30:41.601761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.601791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.612078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fe2e8 00:17:08.353 [2024-07-15 21:30:41.614072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.614102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.624533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fda78 00:17:08.353 [2024-07-15 21:30:41.626617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.626648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.637247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fd208 00:17:08.353 [2024-07-15 21:30:41.639300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.639331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.650108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with 
pdu=0x2000190fc998 00:17:08.353 [2024-07-15 21:30:41.651982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.353 [2024-07-15 21:30:41.652012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:08.353 [2024-07-15 21:30:41.662439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fc128 00:17:08.353 [2024-07-15 21:30:41.664292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.354 [2024-07-15 21:30:41.664321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:08.354 [2024-07-15 21:30:41.674780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb8b8 00:17:08.354 [2024-07-15 21:30:41.676643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.354 [2024-07-15 21:30:41.676684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:08.354 [2024-07-15 21:30:41.687113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fb048 00:17:08.354 [2024-07-15 21:30:41.688940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.354 [2024-07-15 21:30:41.688972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:08.354 [2024-07-15 21:30:41.699411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190fa7d8 00:17:08.354 [2024-07-15 21:30:41.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.354 [2024-07-15 21:30:41.701253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:08.354 [2024-07-15 21:30:41.711641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f9f68 00:17:08.354 [2024-07-15 21:30:41.713443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.354 [2024-07-15 21:30:41.713473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:08.613 [2024-07-15 21:30:41.723867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f96f8 00:17:08.613 [2024-07-15 21:30:41.725643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.613 [2024-07-15 21:30:41.725674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:08.613 [2024-07-15 21:30:41.736077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x861360) with pdu=0x2000190f8e88 00:17:08.614 [2024-07-15 21:30:41.737846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.737877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.748288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f8618 00:17:08.614 [2024-07-15 21:30:41.750035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.750065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.760518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7da8 00:17:08.614 [2024-07-15 21:30:41.762257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.762286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.772692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f7538 00:17:08.614 [2024-07-15 21:30:41.774404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.774434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.784810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6cc8 00:17:08.614 [2024-07-15 21:30:41.786518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.797006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f6458 00:17:08.614 [2024-07-15 21:30:41.798678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.798708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.809211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f5be8 00:17:08.614 [2024-07-15 21:30:41.810878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.810910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.821459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x861360) with pdu=0x2000190f5378 00:17:08.614 [2024-07-15 21:30:41.823133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.823164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.833689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4b08 00:17:08.614 [2024-07-15 21:30:41.835348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.835379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.845913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f4298 00:17:08.614 [2024-07-15 21:30:41.847529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.847560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.858164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f3a28 00:17:08.614 [2024-07-15 21:30:41.859780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.859827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.870412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f31b8 00:17:08.614 [2024-07-15 21:30:41.872005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.872037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.882639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f2948 00:17:08.614 [2024-07-15 21:30:41.884356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.884498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.895181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f20d8 00:17:08.614 [2024-07-15 21:30:41.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.896798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.907481] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f1868 00:17:08.614 [2024-07-15 21:30:41.909068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.909101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.919769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0ff8 00:17:08.614 [2024-07-15 21:30:41.921322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.921355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.932018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190f0788 00:17:08.614 [2024-07-15 21:30:41.933542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.933577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.944249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eff18 00:17:08.614 [2024-07-15 21:30:41.945759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.945796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.956498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ef6a8 00:17:08.614 [2024-07-15 21:30:41.957995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.958027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.968731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190eee38 00:17:08.614 [2024-07-15 21:30:41.970212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.614 [2024-07-15 21:30:41.970246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:08.614 [2024-07-15 21:30:41.980944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190ee5c8 00:17:08.874 [2024-07-15 21:30:41.982404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.874 [2024-07-15 21:30:41.982437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.874 [2024-07-15 21:30:41.993174] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861360) with pdu=0x2000190edd58 00:17:08.874 [2024-07-15 21:30:41.994617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.874 [2024-07-15 21:30:41.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:08.874 00:17:08.874 Latency(us) 00:17:08.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.874 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.874 nvme0n1 : 2.01 20556.30 80.30 0.00 0.00 6222.05 5737.69 24635.22 00:17:08.874 =================================================================================================================== 00:17:08.874 Total : 20556.30 80.30 0.00 0.00 6222.05 5737.69 24635.22 00:17:08.874 0 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:08.874 | .driver_specific 00:17:08.874 | .nvme_error 00:17:08.874 | .status_code 00:17:08.874 | .command_transient_transport_error' 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79904 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79904 ']' 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79904 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.874 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79904 00:17:09.132 killing process with pid 79904 00:17:09.132 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.132 00:17:09.132 Latency(us) 00:17:09.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.132 =================================================================================================================== 00:17:09.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.132 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:09.132 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79904' 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79904 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79904 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # 
run_bperf_err randwrite 131072 16 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79959 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79959 /var/tmp/bperf.sock 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 79959 ']' 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:09.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.133 21:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:09.391 [2024-07-15 21:30:42.504425] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:17:09.392 [2024-07-15 21:30:42.505751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79959 ] 00:17:09.392 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.392 Zero copy mechanism will not be used. 
00:17:09.392 [2024-07-15 21:30:42.663397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.392 [2024-07-15 21:30:42.755669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.651 [2024-07-15 21:30:42.798034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.218 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.477 nvme0n1 00:17:10.477 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:10.477 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.477 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:10.736 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.736 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:10.736 21:30:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:10.736 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:10.736 Zero copy mechanism will not be used. 00:17:10.736 Running I/O for 2 seconds... 
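The trace above prepares the next digest-error run (randwrite, 131072-byte I/O, queue depth 16) entirely over JSON-RPC. Condensed into a shell sketch for readability — sockets, flags and paths are copied from the trace itself; the one assumption is that the bare rpc_cmd calls go to the target application's default RPC socket rather than to /var/tmp/bperf.sock:

  # helper matching the bperf_rpc calls in the trace: talk to the bdevperf instance
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # keep NVMe error statistics and retry failed I/O indefinitely on the bdevperf side
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # target side (default RPC socket assumed): reset, then arm crc32c error injection
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # attach the subsystem with TCP data digest (--ddgst) enabled, as logged
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # inject crc32c corruption (-t corrupt -i 32, exactly as logged) so digest checks fail
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # run the timed workload, then read the transient-transport-error count back out,
  # the same way host/digest.sh's get_transient_errcount does
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pair in the records that follow is one write whose injected CRC32C mismatch was caught by the data digest check and completed with transient transport error status; the host/digest.sh@71 check (e.g. "(( 161 > 0 ))" for the run above) asserts that this count is non-zero.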
00:17:10.736 [2024-07-15 21:30:43.954725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.955118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.955147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.958411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.958493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.958518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.962389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.962451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.962477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.966050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.966108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.966132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.969951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.970037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.970060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.973692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.973778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.977512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.977651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.977676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.980951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.981208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.981237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.984515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.984626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.984658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.988360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.988425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.988447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.992114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.992192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.992221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.995839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.995919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.995941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:43.999558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:43.999664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:43.999690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.003293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.003405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.003428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.006983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.007087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.007109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.010682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.010744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.010767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.013960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.014298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.014328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.017677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.017753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.017775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.021388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.021442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.021465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.025646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.025723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.025751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.029848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.029995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.030029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.033385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.033443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.033466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.036596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.036751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.036775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.039718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.039952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.039979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.042790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.042909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.045865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.045928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.045950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.049388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.049451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.049474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.053136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.053300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.053322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.056452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.056711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.056734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.060001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.060067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.060089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.063800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.063864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.063886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.736 [2024-07-15 21:30:44.067533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.736 [2024-07-15 21:30:44.067592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.736 [2024-07-15 21:30:44.067613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.071232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.071283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.071305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.074854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.074912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.074934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.078611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.078666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 
21:30:44.078687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.082367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.082450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.082471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.086010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.086150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.086171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.089290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.089547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.089573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.092776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.092854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.092875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.096513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.096570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.096592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.737 [2024-07-15 21:30:44.100194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.737 [2024-07-15 21:30:44.100250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.737 [2024-07-15 21:30:44.100272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.103950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.104053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:10.997 [2024-07-15 21:30:44.104073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.107751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.107807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.107840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.111575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.111710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.111731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.115344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.115415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.115436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.119095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.119146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.119166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.122497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.122841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.122866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.126116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.126211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.129814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.129878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.129898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.133463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.133521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.133541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.137171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.137230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.137251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.140929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.141008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.141029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.144664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.144828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.144849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.997 [2024-07-15 21:30:44.148383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.997 [2024-07-15 21:30:44.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.997 [2024-07-15 21:30:44.148507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.151744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.152000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.152026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.155538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.155599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.155620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.159483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.159568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.163387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.163446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.163467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.167258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.167315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.167337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.171120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.171179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.171200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.174924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.175020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.175041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.178890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.178963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.178984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.182700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.182763] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.182784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.186196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.186524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.186550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.189943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.190017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.190038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.193766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.193839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.193860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.197670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.197727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.197747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.201608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.201674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.201694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.205533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.205591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.205611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.209513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.209582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.209603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.213578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.213722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.213743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.217271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.217525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.217546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.221188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.221246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.221266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.225527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.225610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.225630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.229751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.229847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.229868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.234141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.234201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.234224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.238342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 
21:30:44.238408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.238429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.242712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.242793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.242815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.247080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.247153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.247175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.251354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.251423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.251445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.255533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.255608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.255628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.259691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.259754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.998 [2024-07-15 21:30:44.259775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.998 [2024-07-15 21:30:44.264021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.998 [2024-07-15 21:30:44.264116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.264136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.268138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with 
pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.268255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.268275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.272385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.272507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.272527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.276285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.276553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.276574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.280288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.280348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.280368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.284464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.284526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.284548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.288579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.288664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.288692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.292759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.292822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.292858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.296914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.296982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.297002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.300805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.300920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.300941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.304659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.304735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.304761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.308646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.308727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.308753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.311879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.311934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.311955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.315893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.315967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.315987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.319802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.319873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.319893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.323748] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.323886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.323915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.328024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.328128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.328151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.331468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.331663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.331685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.334835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.335030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.335052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.338164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.338305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.338326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.341441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.341521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.341542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.344745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.344943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.344965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 
[2024-07-15 21:30:44.348006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.348219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.348239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.351267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.351324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.351345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.354588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.354653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.354673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.357781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.357856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.357877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.999 [2024-07-15 21:30:44.361098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:10.999 [2024-07-15 21:30:44.361182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.999 [2024-07-15 21:30:44.361204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.364398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.364458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.364478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.367835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.367891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.367912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:11.260 [2024-07-15 21:30:44.371218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.371339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.371360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.374522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.374621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.374642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.377979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.378077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.378098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.381165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.381245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.384518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.384600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.384664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.388289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.388461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.388481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.260 [2024-07-15 21:30:44.391891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.260 [2024-07-15 21:30:44.391952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.260 [2024-07-15 21:30:44.391973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:11.260 [2024-07-15 21:30:44.395518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90
00:17:11.260 [2024-07-15 21:30:44.395616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:11.260 [2024-07-15 21:30:44.395637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:11.260 - 00:17:11.787 [repeated records of the same pattern elided: tcp.c:2081:data_crc32_calc_done reports "Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90" for each 32-block WRITE on sqid:1 (varying lba; cid 0/1/2/15), and nvme_qpair.c:474:spdk_nvme_print_completion prints the matching COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling through 0001/0021/0041/0061]
00:17:11.787 [2024-07-15 21:30:44.916987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90
00:17:11.787 [2024-07-15 21:30:44.917061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:11.787 [2024-07-15 21:30:44.917082] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.920294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.920357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.923598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.923761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.923781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.926881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.926959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.926980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.930194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.930294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.930315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.933550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.933738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.933758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.936836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.936973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.936993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.940122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.940201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 
21:30:44.940221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.943379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.943428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.943449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.946788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.946891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.950211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.950324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.953419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.953477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.956855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.957016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.957037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.960174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.960346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.960365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.963441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.963513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.963533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.966756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.966868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.966889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.970092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.970270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.970290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.973351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.973438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.973458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.976677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.976777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.976803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.979968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.980150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.980169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.983232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.983320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.983341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.986597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.986673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.986694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.989756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.989830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.989851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.993078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.993135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.993155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.787 [2024-07-15 21:30:44.996463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.787 [2024-07-15 21:30:44.996559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.787 [2024-07-15 21:30:44.996580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:44.999678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:44.999765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:44.999786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.002998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.003091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.003113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.006423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.006486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.006508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.009781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.009849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.009871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.013146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.013257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.013278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.016412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.016554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.016574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.019692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.019863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.019884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.023004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.023060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.023082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.026360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.026451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.029658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.029854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.029876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.032917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.033020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.033041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.036271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.036406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.036426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.039647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.039698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.039719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.042951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.043091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.043111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.046153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.046214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.046233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.049429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.049542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.049563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.052779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.052886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.056015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.056076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.056099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.059364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.059434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.059456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.062684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.062763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.062783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.065965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.066036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.066057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.069252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.069328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.069348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.072381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.072509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.072529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.075712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.075776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.075797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.079026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 
21:30:45.079210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.079231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.082328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.082378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.082398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.085640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.085692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.085713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.088924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.089109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.089129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.092133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.092198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.092219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.095511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.095580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.095601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.098718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.098769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.098789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.102005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 
00:17:11.788 [2024-07-15 21:30:45.102097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.102117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.105291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.105443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.105463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.108472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.108549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.108569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.111845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.111947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.111968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.115260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.788 [2024-07-15 21:30:45.115349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.788 [2024-07-15 21:30:45.115369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.788 [2024-07-15 21:30:45.118584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.118701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.118722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.122030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.122087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.122108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.125587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.125649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.125670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.129450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.129514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.129535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.133108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.133354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.133376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.136459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.136511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.136531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.140011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.140141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.140161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.143452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.143628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.143649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.146891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.147007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.147028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.789 [2024-07-15 21:30:45.150309] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:11.789 [2024-07-15 21:30:45.150362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.789 [2024-07-15 21:30:45.150383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.154077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.154129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.154149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.157633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.157725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.157753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.161165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.161329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.164691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.164782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.164811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.168329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.168382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.168402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.171904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.171972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.171991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.175480] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.175574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.175595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.178735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.178804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.178838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.182064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.182128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.182148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.185331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.185400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.188734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.188802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.188836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.192081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.192191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.192211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.195296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.195348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.195368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.050 
[2024-07-15 21:30:45.198633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.198695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.198716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.201889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.201965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.201985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.050 [2024-07-15 21:30:45.205112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.050 [2024-07-15 21:30:45.205185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.050 [2024-07-15 21:30:45.205206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.208405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.208513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.208533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.211617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.211669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.211690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.214966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.215085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.215106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.218475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.218685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.218705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.221751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.221813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.221845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.225164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.225249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.225269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.228490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.228617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.228644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.231792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.231932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.231953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.235119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.235301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.235321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.238359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.238426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.238447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.241673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.241844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.241863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.244963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.245036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.245057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.248201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.248301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.248321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.251430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.251611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.251630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.254661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.254733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.254753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.257943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.258004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.258025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.261268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.261329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.261350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.264676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.264848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.264869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.268105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.268159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.268180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.271439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.271502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.271523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.274925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.275034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.275054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.278251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.278316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.278336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.281770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.281825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.281857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.285086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.285148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.285168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.288578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.288659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.288685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.292212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.292295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.292316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.295732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.295909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.295930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.299251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.299329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.051 [2024-07-15 21:30:45.299350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.051 [2024-07-15 21:30:45.302801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.051 [2024-07-15 21:30:45.302922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.302942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.306151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.306206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.306226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.309547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.309660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.309681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.312876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.313055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.313074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.316143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.316204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.316224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.319477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.319607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.319628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.322831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.322991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.323010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.326135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.326211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.326231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.329661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.329744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.329764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.333164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.333359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.333380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.336478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.336597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 
21:30:45.336635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.339865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.340050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.340069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.343142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.343195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.343220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.346487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.346567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.346588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.349770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.349939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.349960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.353013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.353110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.353130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.356353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.356571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.356590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.359581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.359639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.052 [2024-07-15 21:30:45.359659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.362950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.363072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.363093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.366182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.366233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.366254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.369545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.369689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.369709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.372950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.373135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.373156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.376264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.376446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.376466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.379507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.379558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.379578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.382803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.382874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.386146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.386276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.386297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.389453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.389699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.389719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.392644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.392756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.392783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.395907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.396066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.396085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.052 [2024-07-15 21:30:45.399099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.052 [2024-07-15 21:30:45.399163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.052 [2024-07-15 21:30:45.399183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.053 [2024-07-15 21:30:45.402359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.053 [2024-07-15 21:30:45.402540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.053 [2024-07-15 21:30:45.402559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.053 [2024-07-15 21:30:45.405582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.053 [2024-07-15 21:30:45.405641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.053 [2024-07-15 21:30:45.405661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.053 [2024-07-15 21:30:45.408917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.053 [2024-07-15 21:30:45.408984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.053 [2024-07-15 21:30:45.409005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.053 [2024-07-15 21:30:45.412213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.053 [2024-07-15 21:30:45.412339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.053 [2024-07-15 21:30:45.412359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.053 [2024-07-15 21:30:45.415527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.053 [2024-07-15 21:30:45.415691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.053 [2024-07-15 21:30:45.415711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.314 [2024-07-15 21:30:45.418910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.314 [2024-07-15 21:30:45.419117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.314 [2024-07-15 21:30:45.419137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.314 [2024-07-15 21:30:45.422121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.314 [2024-07-15 21:30:45.422172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.314 [2024-07-15 21:30:45.422192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.314 [2024-07-15 21:30:45.425424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.425566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.425586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.428678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.428911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.428931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.431905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.431981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.432001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.435225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.435382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.435402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.438485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.438564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.438584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.441763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.441870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.441891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.445063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.445319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.445339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.448229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.448338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.448357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.451507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.451665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.451685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.454768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.454868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.454888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.458037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.458146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.458167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.461226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.461281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.461301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.464530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.464596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.464631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.468462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.468590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.468628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.471834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.471998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.472019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.475116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 
21:30:45.475207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.475231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.478582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.478787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.478809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.482027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.482106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.482129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.485467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.485665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.485691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.488827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.488921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.488950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.492039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.492138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.492165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.495338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.495452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.495480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.498847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with 
pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.498934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.498964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.502189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.502463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.505465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.505648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.505675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.508990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.509048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.509071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.512424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.512526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.512547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.515754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.515924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.515946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.518985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.519037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.519059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.315 [2024-07-15 21:30:45.522349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.315 [2024-07-15 21:30:45.522422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.315 [2024-07-15 21:30:45.522443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.525894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.525947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.525969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.529327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.529481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.529501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.532503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.532634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.532663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.535743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.535797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.535834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.539114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.539185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.539206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.542397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.542576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.542597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.545671] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.545734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.545755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.549063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.549137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.549159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.552269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.552322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.552342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.555671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.555768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.555790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.558972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.559136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.559156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.562198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.562312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.562333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.565521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.565586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.565608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.568858] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.568987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.569008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.572038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.572115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.572135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.575367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.575457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.575478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.578776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.578968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.578989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.582022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.582117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.582139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.585381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.585465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.585486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.588548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.588619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 
21:30:45.591920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.591988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.592009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.595226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.595350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.595371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.598492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.598659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.598679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.601780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.601876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.601897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.605065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.605169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.605190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.608274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.608331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.608351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.611668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.611769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.611789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
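Note on the repeated failures above: tcp.c data_crc32_calc_done reports a data digest mismatch on the TCP qpair, and each affected WRITE is then completed back to the host with the generic status pair (00/22), which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR. The data digest (DDGST) in NVMe/TCP is a CRC-32C computed over the PDU's DATA field. The sketch below is not taken from the SPDK sources; it is a minimal, self-contained C illustration of that check, assuming a bitwise reflected CRC-32C (polynomial 0x82F63B78) and a hypothetical received_ddgst value that is deliberately corrupted to force the mismatch path.

/* crc32c_ddgst_check.c - illustrative only; not SPDK code.
 * Computes a CRC-32C (Castagnoli) over a buffer, the same algorithm the
 * NVMe/TCP data digest (DDGST) uses, and compares it against a received
 * digest. A mismatch is what the log above reports as a "Data digest error",
 * after which the command completes with TRANSIENT TRANSPORT ERROR (00/22).
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, reflected CRC-32C with polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for a PDU DATA field; real PDUs carry the logical block data. */
    const char data[] = "123456789";
    uint32_t computed = crc32c(data, strlen(data));

    /* Hypothetical digest from the PDU trailer; flipped to force a mismatch. */
    uint32_t received_ddgst = computed ^ 0x1u;

    /* CRC-32C of "123456789" has the well-known check value 0xE3069283. */
    printf("computed DDGST: 0x%08X\n", computed);

    if (computed != received_ddgst) {
        printf("data digest mismatch -> command completed with a transport-level error\n");
        return 1;
    }
    return 0;
}

SPDK's TCP transport computes the same CRC-32C internally with its own library helpers; the routine above only mirrors the arithmetic so the "Data digest error" lines can be read for what they are: a digest/payload mismatch detected on receive, surfaced to the host as a transient transport error rather than a media or command error.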
00:17:12.316 [2024-07-15 21:30:45.614985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.615147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.615167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.618241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.316 [2024-07-15 21:30:45.618300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.316 [2024-07-15 21:30:45.618321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.316 [2024-07-15 21:30:45.621570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.621659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.621680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.624880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.625082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.625102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.628131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.628186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.628206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.631418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.631584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.631604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.634721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.634774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.634795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.637996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.638060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.638082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.641175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.641376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.641396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.644362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.644423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.644443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.647723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.647781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.647802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.650978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.651056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.654308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.654447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.654467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.657792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.658011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.658032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.661256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.661318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.661339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.664639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.664736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.664764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.668099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.668280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.668300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.671480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.671555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.671576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.674970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.675115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.675136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.317 [2024-07-15 21:30:45.678427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.317 [2024-07-15 21:30:45.678601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.317 [2024-07-15 21:30:45.678622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.681847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.681906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.681927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.685139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.685256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.685277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.688353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.688402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.688422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.691871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.691924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.691945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.695379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.695629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.695649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.698844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.698906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.698928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.577 [2024-07-15 21:30:45.702472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.577 [2024-07-15 21:30:45.702593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.577 [2024-07-15 21:30:45.702615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.706133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.706291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.706319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.709786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.709990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.710011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.713438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.713652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.713673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.717105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.717164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.717185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.720886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.721068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.721099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.724623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.724812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.724850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.728247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.728322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.728346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.732088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.732223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 
21:30:45.732246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.736053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.736227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.736258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.739855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.740031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.740060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.743507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.743581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.743603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.747641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.747711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.747732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.751733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.751850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.751873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.755700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.755849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.755871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.759258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.759508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
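Annotation: the repeated NOTICE/ERROR pairs above are the expected output of the nvmf_digest_error part of the digest test. Each WRITE whose CRC32C data digest fails verification is flagged by the TCP transport in data_crc32_calc_done and completed with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22); the host-side bdev layer accumulates these completions, and the test reads the counter back over the bdevperf RPC socket in the host/digest.sh trace further down. A condensed standalone sketch of that query, with the socket path, RPC name and jq filter taken from that trace (this is only an illustration, not an extra step the test performs):

  # Ask the running bdevperf instance for per-bdev I/O statistics and
  # pull out the transient-transport-error counter for nvme0n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'

The test only asserts that this count is greater than zero (561 in this run), confirming that the injected digest failures were detected and surfaced as transient transport errors.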
00:17:12.578 [2024-07-15 21:30:45.759529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.762996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.763053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.763074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.767071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.767130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.767151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.771162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.771220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.771242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.775128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.775201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.778906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.778972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.778994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.782685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.782751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.782772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.786580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.786649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.786671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.790455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.790512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.790534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.794351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.794404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.794425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.797805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.798143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.798164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.801542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.801616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.801637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.805379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.805433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.805454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.809165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.809223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.809244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.578 [2024-07-15 21:30:45.812991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.578 [2024-07-15 21:30:45.813045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.578 [2024-07-15 21:30:45.813066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.816848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.816925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.816945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.820725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.820815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.820849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.824494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.824649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.824675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.827996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.828247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.828268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.831783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.831851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.835687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.835740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.835761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.839499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.839554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.839575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.843583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.843635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.843656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.847513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.847567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.847589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.851446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.851507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.851528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.855339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.855417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.855438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.859227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.859281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.859302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.862775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.863102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.863128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.866573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.866648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.866668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.870517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.870576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.870597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.874363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.874422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.874443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.878276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.878356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.882191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.882245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.882266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.886122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.886235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.886256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.889905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.890028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.890049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.893404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.893634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.893656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.897095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.897151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.897171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.900912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.900963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.900985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.904721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.904802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.908561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.908646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.908673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.912354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.912414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.912435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.916187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.916246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.916267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.919965] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 
21:30:45.920043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.920064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.923790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.579 [2024-07-15 21:30:45.923858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.579 [2024-07-15 21:30:45.923879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.579 [2024-07-15 21:30:45.927251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.580 [2024-07-15 21:30:45.927581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.580 [2024-07-15 21:30:45.927608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.580 [2024-07-15 21:30:45.930942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.580 [2024-07-15 21:30:45.931025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.580 [2024-07-15 21:30:45.931046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.580 [2024-07-15 21:30:45.934776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.580 [2024-07-15 21:30:45.934838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.580 [2024-07-15 21:30:45.934859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.580 [2024-07-15 21:30:45.938686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8616a0) with pdu=0x2000190fef90 00:17:12.580 [2024-07-15 21:30:45.938763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.580 [2024-07-15 21:30:45.938784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.580 00:17:12.580 Latency(us) 00:17:12.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.580 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:12.580 nvme0n1 : 2.00 8700.31 1087.54 0.00 0.00 1835.50 1223.87 10159.40 00:17:12.580 =================================================================================================================== 00:17:12.580 Total : 8700.31 1087.54 0.00 0.00 1835.50 1223.87 10159.40 00:17:12.580 0 00:17:12.839 21:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:12.839 21:30:45 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:12.839 21:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:12.839 21:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:12.839 | .driver_specific 00:17:12.839 | .nvme_error 00:17:12.839 | .status_code 00:17:12.839 | .command_transient_transport_error' 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 561 > 0 )) 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79959 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79959 ']' 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79959 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79959 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:12.839 killing process with pid 79959 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79959' 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 79959 00:17:12.839 Received shutdown signal, test time was about 2.000000 seconds 00:17:12.839 00:17:12.839 Latency(us) 00:17:12.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.839 =================================================================================================================== 00:17:12.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.839 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79959 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79757 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 79757 ']' 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 79757 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79757 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.097 killing process with pid 79757 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79757' 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@967 -- # kill 79757 00:17:13.097 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 79757 00:17:13.356 00:17:13.356 real 0m17.346s 00:17:13.356 user 0m31.864s 00:17:13.356 sys 0m5.400s 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:13.356 ************************************ 00:17:13.356 END TEST nvmf_digest_error 00:17:13.356 ************************************ 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.356 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.614 rmmod nvme_tcp 00:17:13.614 rmmod nvme_fabrics 00:17:13.614 rmmod nvme_keyring 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79757 ']' 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79757 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 79757 ']' 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 79757 00:17:13.614 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (79757) - No such process 00:17:13.614 Process with pid 79757 is not found 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 79757 is not found' 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.614 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.615 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.615 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.615 21:30:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.615 00:17:13.615 real 0m36.003s 00:17:13.615 user 1m4.717s 00:17:13.615 sys 0m11.397s 00:17:13.615 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.615 21:30:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set 
+x 00:17:13.615 ************************************ 00:17:13.615 END TEST nvmf_digest 00:17:13.615 ************************************ 00:17:13.615 21:30:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.615 21:30:46 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:13.615 21:30:46 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:13.615 21:30:46 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:13.615 21:30:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.615 21:30:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.615 21:30:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.615 ************************************ 00:17:13.615 START TEST nvmf_host_multipath 00:17:13.615 ************************************ 00:17:13.615 21:30:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:13.874 * Looking for test storage... 00:17:13.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.874 21:30:47 
nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.874 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.875 21:30:47 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.875 Cannot find device "nvmf_tgt_br" 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.875 Cannot find device "nvmf_tgt_br2" 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.875 Cannot find device "nvmf_tgt_br" 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.875 Cannot find device "nvmf_tgt_br2" 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:13.875 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:14.132 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:14.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:17:14.390 00:17:14.390 --- 10.0.0.2 ping statistics --- 00:17:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.390 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:14.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:17:14.390 00:17:14.390 --- 10.0.0.3 ping statistics --- 00:17:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.390 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:17:14.390 00:17:14.390 --- 10.0.0.1 ping statistics --- 00:17:14.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.390 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.390 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80223 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80223 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80223 ']' 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.391 21:30:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:14.391 [2024-07-15 21:30:47.747161] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:17:14.391 [2024-07-15 21:30:47.747214] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.649 [2024-07-15 21:30:47.890925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:14.649 [2024-07-15 21:30:47.986446] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
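For reference, the network that nvmf_veth_init assembles in the trace above can be reproduced on its own with roughly the commands below. Interface names and addresses are taken verbatim from the trace; this is a condensed sketch, not the full nvmf/common.sh logic (which, as the "Cannot find device" messages show, also tears down any leftover devices first).

# Condensed sketch of the veth/namespace topology built above (names/addresses as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings above confirm the result: 10.0.0.2 and 10.0.0.3 answer from inside nvmf_tgt_ns_spdk and 10.0.0.1 answers from the host, so the nvmf_tgt process launched with "ip netns exec nvmf_tgt_ns_spdk" is reachable from the initiator side over the bridge.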
00:17:14.649 [2024-07-15 21:30:47.986496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.649 [2024-07-15 21:30:47.986506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.649 [2024-07-15 21:30:47.986514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.649 [2024-07-15 21:30:47.986520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.649 [2024-07-15 21:30:47.986717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.649 [2024-07-15 21:30:47.986719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.909 [2024-07-15 21:30:48.027559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80223 00:17:15.479 21:30:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:15.738 [2024-07-15 21:30:48.846982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.738 21:30:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:15.738 Malloc0 00:17:15.738 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:15.998 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.256 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.516 [2024-07-15 21:30:49.663776] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:16.516 [2024-07-15 21:30:49.855586] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80279 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80279 /var/tmp/bdevperf.sock 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 80279 ']' 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.516 21:30:49 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 21:30:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.453 21:30:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:17.453 21:30:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:17.712 21:30:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:17.971 Nvme0n1 00:17:17.971 21:30:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:18.230 Nvme0n1 00:17:18.230 21:30:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:18.230 21:30:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:19.166 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:19.166 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:19.425 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:19.683 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:19.683 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80313 00:17:19.683 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:19.683 21:30:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:26.279 21:30:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:26.279 21:30:58 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:26.279 Attaching 4 probes... 00:17:26.279 @path[10.0.0.2, 4421]: 22731 00:17:26.279 @path[10.0.0.2, 4421]: 22879 00:17:26.279 @path[10.0.0.2, 4421]: 22847 00:17:26.279 @path[10.0.0.2, 4421]: 22899 00:17:26.279 @path[10.0.0.2, 4421]: 21894 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80313 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:26.279 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:26.280 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:26.280 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80431 00:17:26.280 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:26.280 21:30:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:32.844 Attaching 4 probes... 
00:17:32.844 @path[10.0.0.2, 4420]: 23064 00:17:32.844 @path[10.0.0.2, 4420]: 23408 00:17:32.844 @path[10.0.0.2, 4420]: 23284 00:17:32.844 @path[10.0.0.2, 4420]: 23312 00:17:32.844 @path[10.0.0.2, 4420]: 23192 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80431 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:32.844 21:31:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:32.844 21:31:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:33.103 21:31:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:33.103 21:31:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:33.103 21:31:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80543 00:17:33.103 21:31:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:39.744 Attaching 4 probes... 
00:17:39.744 @path[10.0.0.2, 4421]: 17218 00:17:39.744 @path[10.0.0.2, 4421]: 22487 00:17:39.744 @path[10.0.0.2, 4421]: 22776 00:17:39.744 @path[10.0.0.2, 4421]: 22614 00:17:39.744 @path[10.0.0.2, 4421]: 22516 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80543 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80661 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:39.744 21:31:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:46.344 21:31:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:46.344 21:31:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:46.344 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:46.344 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:46.344 Attaching 4 probes... 
00:17:46.344 00:17:46.344 00:17:46.344 00:17:46.345 00:17:46.345 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80661 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80772 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:46.345 21:31:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:52.905 Attaching 4 probes... 
00:17:52.905 @path[10.0.0.2, 4421]: 22119 00:17:52.905 @path[10.0.0.2, 4421]: 22713 00:17:52.905 @path[10.0.0.2, 4421]: 22814 00:17:52.905 @path[10.0.0.2, 4421]: 22640 00:17:52.905 @path[10.0.0.2, 4421]: 22816 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80772 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:52.905 21:31:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:53.837 21:31:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:53.837 21:31:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80897 00:17:53.837 21:31:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:53.837 21:31:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.396 Attaching 4 probes... 
00:18:00.396 @path[10.0.0.2, 4420]: 22376 00:18:00.396 @path[10.0.0.2, 4420]: 22599 00:18:00.396 @path[10.0.0.2, 4420]: 20816 00:18:00.396 @path[10.0.0.2, 4420]: 22283 00:18:00.396 @path[10.0.0.2, 4420]: 22090 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80897 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.396 [2024-07-15 21:31:33.392512] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:00.396 21:31:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:07.021 21:31:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:07.021 21:31:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81071 00:18:07.021 21:31:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:07.021 21:31:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80223 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:13.590 Attaching 4 probes... 
00:18:13.590 @path[10.0.0.2, 4421]: 22383 00:18:13.590 @path[10.0.0.2, 4421]: 22514 00:18:13.590 @path[10.0.0.2, 4421]: 22554 00:18:13.590 @path[10.0.0.2, 4421]: 22510 00:18:13.590 @path[10.0.0.2, 4421]: 22665 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81071 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80279 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80279 ']' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80279 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80279 00:18:13.590 killing process with pid 80279 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80279' 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80279 00:18:13.590 21:31:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80279 00:18:13.590 Connection closed with partial response: 00:18:13.590 00:18:13.590 00:18:13.590 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80279 00:18:13.590 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:13.590 [2024-07-15 21:30:49.923515] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:18:13.590 [2024-07-15 21:30:49.923628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80279 ] 00:18:13.590 [2024-07-15 21:30:50.064956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.590 [2024-07-15 21:30:50.159308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.590 [2024-07-15 21:30:50.201323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.590 Running I/O for 90 seconds... 
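The bdevperf run above attaches controller Nvme0 to the same subsystem over both listeners (10.0.0.2:4420, then 10.0.0.2:4421 with -x multipath), and each test step is a set_ANA_state / confirm_io_on_port pair: flip the ANA states of the two listeners, give the bpftrace probe six seconds of I/O, then check that traffic is flowing only through the listener that is supposed to carry it. Stripped of the xtrace noise, the two helpers behave roughly as sketched below. The RPC and bpftrace invocations, the NQN, and the variable names nvmfapp_pid/dtrace_pid are as they appear in the trace; the trace.txt redirection, quoting, and exact pipeline order are assumptions, so treat this as a sketch of the flow rather than a copy of host/multipath.sh.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh

set_ANA_state() { # e.g. set_ANA_state non_optimized optimized
	# $1 applies to the 4420 listener, $2 to the 4421 listener
	"$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	"$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() { # $1 = ANA state expected to carry I/O, $2 = the port of that listener
	# nvmf_path.bt prints per-path I/O counters like "@path[10.0.0.2, 4421]: 22731"
	"$bpf_sh" "$nvmfapp_pid" /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
	dtrace_pid=$!
	sleep 6
	# Port of the listener currently reported in the expected ANA state
	active_port=$("$rpc_py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
		jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
	# Port that actually received I/O according to the bpftrace counters
	port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
	[[ $active_port == "$2" ]] # under the harness's set -e, a mismatch fails the test here
	[[ $port == "$2" ]]
	kill "$dtrace_pid"
	rm -f trace.txt
}

This is why the @path[10.0.0.2, 4421] counters appear in the probe dumps only while 4421 is optimized, the @path[10.0.0.2, 4420] counters only while 4421 is inaccessible (or removed) and I/O falls back to the non_optimized 4420 path, and the inaccessible/inaccessible step prints an empty probe block with both ports blank.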
00:18:13.590 [2024-07-15 21:30:59.583446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.583772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.583970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.583982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.584012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.590 [2024-07-15 21:30:59.584043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.590 [2024-07-15 21:30:59.584381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.590 [2024-07-15 21:30:59.584399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.591 [2024-07-15 21:30:59.584502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.584829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.584971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.584983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.585043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.591 [2024-07-15 21:30:59.585088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:18:13.591 [2024-07-15 21:30:59.585452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.591 [2024-07-15 21:30:59.585695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.591 [2024-07-15 21:30:59.585707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.585737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.585798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.585845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.585888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.585919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.585949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.585980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.585998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:13.592 [2024-07-15 21:30:59.586380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.592 [2024-07-15 21:30:59.586862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.586973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.586985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.587007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.587020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.592 [2024-07-15 21:30:59.587038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.592 [2024-07-15 21:30:59.587050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.587282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:18:13.593 [2024-07-15 21:30:59.587313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.587325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:30:59.588534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:30:59.588804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:30:59.588829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.023715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.023784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.023856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.023871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.023889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.023902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.023920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.023932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.023986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.024016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.024045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.593 [2024-07-15 21:31:06.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.593 [2024-07-15 21:31:06.024442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.593 [2024-07-15 21:31:06.024550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.593 [2024-07-15 21:31:06.024562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.024830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.024860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.024892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.024923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.024953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.024970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.024983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:18:13.594 [2024-07-15 21:31:06.025368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.594 [2024-07-15 21:31:06.025564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.594 [2024-07-15 21:31:06.025612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.594 [2024-07-15 21:31:06.025624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.025805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.025862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.025893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.025995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:13.595 [2024-07-15 21:31:06.026375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.595 [2024-07-15 21:31:06.026902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.026980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.026993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.595 [2024-07-15 21:31:06.027011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.595 [2024-07-15 21:31:06.027024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:06.027054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:06.027084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:06.027119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:06.027726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
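Each *NOTICE* pair in the output above is SPDK printing an I/O command (nvme_io_qpair_print_command) followed by the completion it received (spdk_nvme_print_completion). The "(03/02)" in every completion is the NVMe status code type / status code pair: status code type 0x3 is Path Related Status and status code 0x02 is Asymmetric Access Inaccessible, which is why the text reads ASYMMETRIC ACCESS INACCESSIBLE -- the reads and writes queued on qid:1 are being completed with an ANA-inaccessible status, consistent with this test driving the path's ANA state to inaccessible. The remaining fields are completion dword 0 (cdw0), the submission queue head pointer (sqhd), and the phase (p), more (m) and do-not-retry (dnr) bits. The short standalone C sketch below decodes that status pair; the struct layout and the decode() helper are illustrative only (they are not the SPDK API), and the value names follow the NVMe base specification.

/* Minimal, self-contained decoder for the "(SCT/SC)" pair shown in the
 * completion lines above.  Illustrative only: this struct and decode()
 * are not SPDK API; the values follow the NVMe base specification. */
#include <stdio.h>

struct cpl_status {
	unsigned sct;	/* status code type, 0x3 = Path Related Status */
	unsigned sc;	/* status code,      0x2 = ANA Inaccessible    */
};

static const char *decode(struct cpl_status s)
{
	if (s.sct == 0x0 && s.sc == 0x0)
		return "SUCCESS";
	if (s.sct == 0x3) {		/* Path Related Status */
		switch (s.sc) {
		case 0x1: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
		case 0x2: return "ASYMMETRIC ACCESS INACCESSIBLE";
		case 0x3: return "ASYMMETRIC ACCESS TRANSITION";
		}
	}
	return "OTHER";
}

int main(void)
{
	struct cpl_status s = { .sct = 0x3, .sc = 0x2 };	/* the (03/02) above */
	printf("(%02x/%02x) -> %s\n", s.sct, s.sc, decode(s));
	return 0;
}

Built with any C compiler, this prints "(03/02) -> ASYMMETRIC ACCESS INACCESSIBLE", matching the completions logged here.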
00:18:13.596 [2024-07-15 21:31:06.027920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.027969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.027993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:06.028656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:06.028670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.596 [2024-07-15 21:31:12.847582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:12.847612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.596 [2024-07-15 21:31:12.847641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:12.847670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:12.847716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:12.847746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:13.596 [2024-07-15 21:31:12.847764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.596 [2024-07-15 21:31:12.847776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 
nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.847985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.847997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:18:13.597 [2024-07-15 21:31:12.848582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.597 [2024-07-15 21:31:12.848850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.848977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.848994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.849006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.849024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.849036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.849054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.849066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.597 [2024-07-15 21:31:12.849083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.597 [2024-07-15 21:31:12.849096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.598 [2024-07-15 21:31:12.849501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.598 [2024-07-15 21:31:12.849849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.849968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.849986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.598 [2024-07-15 21:31:12.850200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.598 [2024-07-15 21:31:12.850213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:18:13.599 [2024-07-15 21:31:12.850415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.850577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.850779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.850791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.599 [2024-07-15 21:31:12.851349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:12.851932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:12.851958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:13.599 [2024-07-15 21:31:12.851972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:25.972429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:25.972491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:25.972537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:25.972552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:25.972570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:25.972583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:25.972600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:25.972612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:13.599 [2024-07-15 21:31:25.972638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.599 [2024-07-15 21:31:25.972650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.972702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.972732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.972762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.972970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.972982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973144] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45976 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.600 [2024-07-15 21:31:25.973666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.600 [2024-07-15 21:31:25.973717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.600 [2024-07-15 21:31:25.973963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.600 [2024-07-15 21:31:25.973976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.973989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.974711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:13.601 [2024-07-15 21:31:25.974724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.974977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.974991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:13.601 [2024-07-15 21:31:25.975134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.975160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.601 [2024-07-15 21:31:25.975173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.601 [2024-07-15 21:31:25.975185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.602 [2024-07-15 21:31:25.975422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f26d0 is same with the state(5) to be set 00:18:13.602 [2024-07-15 21:31:25.975449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45704 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45712 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 
[2024-07-15 21:31:25.975522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45720 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45736 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46256 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46264 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46272 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46280 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46288 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46296 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46304 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.975965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.975974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.975983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46312 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45744 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45752 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45760 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45768 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45776 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.602 [2024-07-15 21:31:25.976240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45784 len:8 PRP1 0x0 PRP2 0x0 00:18:13.602 [2024-07-15 21:31:25.976252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.602 [2024-07-15 21:31:25.976264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.602 [2024-07-15 21:31:25.976273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.603 [2024-07-15 21:31:25.992179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45792 len:8 PRP1 0x0 PRP2 0x0 00:18:13.603 [2024-07-15 21:31:25.992225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
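The long run of *NOTICE* lines above is expected for this test: every WRITE/READ still queued on the failing path is completed with ABORTED - SQ DELETION (00/08) before the qpair is freed and the controller is reset (see the bdev_nvme_disconnected_qpair_cb notice just below). A quick way to confirm the flood contains only aborts and ANA transitions, rather than real I/O errors, is to grep a saved copy of this console output; build.log is a hypothetical capture file used for illustration, not something the test writes itself:

  # count queued I/Os that were completed with ABORTED - SQ DELETION (generic status 00/08)
  grep -c 'ABORTED - SQ DELETION (00/08)' build.log
  # anything that is neither an abort nor an ANA-state completion would show up here
  grep 'spdk_nvme_print_completion' build.log | grep -v -e 'ABORTED - SQ DELETION' -e 'ASYMMETRIC ACCESS INACCESSIBLE'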
00:18:13.603 [2024-07-15 21:31:25.992253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:13.603 [2024-07-15 21:31:25.992267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:13.603 [2024-07-15 21:31:25.992281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45800 len:8 PRP1 0x0 PRP2 0x0 00:18:13.603 [2024-07-15 21:31:25.992299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992365] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5f26d0 was disconnected and freed. reset controller. 00:18:13.603 [2024-07-15 21:31:25.992521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.603 [2024-07-15 21:31:25.992548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.603 [2024-07-15 21:31:25.992586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.603 [2024-07-15 21:31:25.992658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.603 [2024-07-15 21:31:25.992694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.603 [2024-07-15 21:31:25.992732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.603 [2024-07-15 21:31:25.992757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c100 is same with the state(5) to be set 00:18:13.603 [2024-07-15 21:31:25.994021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.603 [2024-07-15 21:31:25.994070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56c100 (9): Bad file descriptor 00:18:13.603 [2024-07-15 21:31:25.994490] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:13.603 [2024-07-15 21:31:25.994535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x56c100 with addr=10.0.0.2, port=4421 00:18:13.603 [2024-07-15 21:31:25.994557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c100 is same with the state(5) to be set 00:18:13.603 [2024-07-15 21:31:25.994594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x56c100 (9): Bad file descriptor 00:18:13.603 [2024-07-15 21:31:25.994625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.603 [2024-07-15 21:31:25.994644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:13.603 [2024-07-15 21:31:25.994662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.603 [2024-07-15 21:31:25.994883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:13.603 [2024-07-15 21:31:25.994906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.603 [2024-07-15 21:31:36.027732] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:13.603 Received shutdown signal, test time was about 54.440738 seconds 00:18:13.603 00:18:13.603 Latency(us) 00:18:13.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.603 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.603 Verification LBA range: start 0x0 length 0x4000 00:18:13.603 Nvme0n1 : 54.44 9637.18 37.65 0.00 0.00 13266.71 947.51 7061253.96 00:18:13.603 =================================================================================================================== 00:18:13.603 Total : 9637.18 37.65 0.00 0.00 13266.71 947.51 7061253.96 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.603 rmmod nvme_tcp 00:18:13.603 rmmod nvme_fabrics 00:18:13.603 rmmod nvme_keyring 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80223 ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 80223 ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # 
'[' Linux = Linux ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.603 killing process with pid 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80223' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 80223 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:13.603 00:18:13.603 real 0m59.757s 00:18:13.603 user 2m40.499s 00:18:13.603 sys 0m22.442s 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.603 21:31:46 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:13.603 ************************************ 00:18:13.603 END TEST nvmf_host_multipath 00:18:13.603 ************************************ 00:18:13.603 21:31:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:13.603 21:31:46 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:13.603 21:31:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:13.603 21:31:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.603 21:31:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.603 ************************************ 00:18:13.603 START TEST nvmf_timeout 00:18:13.603 ************************************ 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:13.603 * Looking for test storage... 
00:18:13.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.603 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.862 
21:31:46 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.862 21:31:46 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.862 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:13.863 21:31:46 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:13.863 Cannot find device "nvmf_tgt_br" 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.863 Cannot find device "nvmf_tgt_br2" 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:13.863 Cannot find device "nvmf_tgt_br" 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:13.863 Cannot find device "nvmf_tgt_br2" 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.863 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.863 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:18:14.121 00:18:14.121 --- 10.0.0.2 ping statistics --- 00:18:14.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.121 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:14.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:14.121 00:18:14.121 --- 10.0.0.3 ping statistics --- 00:18:14.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.121 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:14.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:14.121 00:18:14.121 --- 10.0.0.1 ping statistics --- 00:18:14.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.121 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81373 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81373 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81373 ']' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.121 21:31:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:14.121 [2024-07-15 21:31:47.459046] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
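The nvmf_veth_init sequence traced above boils down to a small veth/namespace topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the default namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live inside the nvmf_tgt_ns_spdk namespace, and the peer ends (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) are enslaved to the nvmf_br bridge, with iptables accepting TCP port 4420 on the initiator side. A condensed sketch using only commands that appear in the trace (the second target pair is wired the same way as the first; cleanup, error handling and waitforlisten are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    # the target app then runs inside the namespace, pinned to cores 0-1 (-m 0x3)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3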
00:18:14.122 [2024-07-15 21:31:47.459115] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.380 [2024-07-15 21:31:47.601921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.380 [2024-07-15 21:31:47.701085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.380 [2024-07-15 21:31:47.701138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.381 [2024-07-15 21:31:47.701148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.381 [2024-07-15 21:31:47.701156] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.381 [2024-07-15 21:31:47.701162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.381 [2024-07-15 21:31:47.701297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.381 [2024-07-15 21:31:47.701299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.381 [2024-07-15 21:31:47.743193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:14.946 21:31:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.946 21:31:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:14.946 21:31:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.946 21:31:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.946 21:31:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:15.204 21:31:48 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.204 21:31:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:15.204 21:31:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:15.204 [2024-07-15 21:31:48.523686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.204 21:31:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:15.462 Malloc0 00:18:15.462 21:31:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.719 21:31:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.977 [2024-07-15 21:31:49.299883] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81420 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81420 /var/tmp/bdevperf.sock 
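With the target listening on /var/tmp/spdk.sock, host/timeout.sh provisions it entirely through scripts/rpc.py: a TCP transport (the '-t tcp -o' transport options from common.sh plus '-u 8192'), a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. The same sequence, condensed from the trace above:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started as a separate application with its own RPC socket (-z -r /var/tmp/bdevperf.sock) so that the host side of the test can also be driven over RPC.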
00:18:15.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81420 ']' 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:15.977 21:31:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:16.235 [2024-07-15 21:31:49.369703] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:18:16.235 [2024-07-15 21:31:49.369772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81420 ] 00:18:16.235 [2024-07-15 21:31:49.509206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.492 [2024-07-15 21:31:49.610174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.492 [2024-07-15 21:31:49.652688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.058 21:31:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.058 21:31:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:17.058 21:31:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:17.316 21:31:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:17.575 NVMe0n1 00:18:17.575 21:31:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81449 00:18:17.575 21:31:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:17.575 21:31:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:17.575 Running I/O for 10 seconds... 
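Everything from here on is the timeout scenario proper: bdevperf runs a 128-deep, 4 KiB verify workload (-q 128 -o 4096 -w verify -t 10) against NVMe0n1 while the script removes the 10.0.0.2:4420 listener out from under it, which is what produces the long run of 'ABORTED - SQ DELETION' completions below. Because the controller was attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, the bdev_nvme layer is expected to retry the connection roughly every 2 seconds and to declare the controller lost only if it stays unreachable for 5 seconds. One way to watch that from the host side (a hypothetical observation loop, not part of host/timeout.sh) is to poll the bdevperf RPC socket:

    # hypothetical: poll bdevperf's view of the attached controller while the listener is gone
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while sleep 1; do
        $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    done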
00:18:18.512 21:31:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.775 [2024-07-15 21:31:51.989521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.775 [2024-07-15 21:31:51.989727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 
[2024-07-15 21:31:51.989767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.775 [2024-07-15 21:31:51.989960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.775 [2024-07-15 21:31:51.989971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.989980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.989990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.989999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.776 [2024-07-15 21:31:51.990657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.776 [2024-07-15 21:31:51.990676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.776 [2024-07-15 21:31:51.990703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 
21:31:51.990935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.990974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.990991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.777 [2024-07-15 21:31:51.991175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.777 [2024-07-15 21:31:51.991314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.777 [2024-07-15 21:31:51.991324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92048 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 
21:31:51.991759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.778 [2024-07-15 21:31:51.991826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.778 [2024-07-15 21:31:51.991926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.778 [2024-07-15 21:31:51.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.779 [2024-07-15 21:31:51.991947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.991957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:18.779 [2024-07-15 21:31:51.991966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.991977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a24d0 is same with the state(5) to be set 00:18:18.779 [2024-07-15 21:31:51.991989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.991996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91704 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92160 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92176 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92184 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:92192 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92200 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92224 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92240 len:8 PRP1 0x0 PRP2 
0x0 00:18:18.779 [2024-07-15 21:31:51.992384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92248 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92256 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92264 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.779 [2024-07-15 21:31:51.992493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.779 [2024-07-15 21:31:51.992500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.779 [2024-07-15 21:31:51.992508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92272 len:8 PRP1 0x0 PRP2 0x0 00:18:18.779 [2024-07-15 21:31:51.992517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.780 [2024-07-15 21:31:51.992526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:18.780 [2024-07-15 21:31:51.992534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.780 [2024-07-15 21:31:51.992541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92280 len:8 PRP1 0x0 PRP2 0x0 00:18:18.780 [2024-07-15 21:31:51.992551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.780 [2024-07-15 21:31:51.992598] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24a24d0 was disconnected and freed. reset controller. 
00:18:18.780 [2024-07-15 21:31:51.992831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:18.780 [2024-07-15 21:31:51.992908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457d40 (9): Bad file descriptor
00:18:18.780 [2024-07-15 21:31:51.993012] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:18.780 [2024-07-15 21:31:51.993034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2457d40 with addr=10.0.0.2, port=4420 [2024-07-15 21:31:51.993045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2457d40 is same with the state(5) to be set
00:18:18.780 21:31:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:18:18.780 [2024-07-15 21:31:52.012360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457d40 (9): Bad file descriptor
00:18:18.780 [2024-07-15 21:31:52.012422] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:18:18.780 [2024-07-15 21:31:52.012442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:18.780 [2024-07-15 21:31:52.012454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:18.780 [2024-07-15 21:31:52.012468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:18.780 [2024-07-15 21:31:52.012493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:18.780 [2024-07-15 21:31:52.012505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:20.727 [2024-07-15 21:31:54.009419] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:20.727 [2024-07-15 21:31:54.009489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2457d40 with addr=10.0.0.2, port=4420
00:18:20.728 [2024-07-15 21:31:54.009503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2457d40 is same with the state(5) to be set
00:18:20.728 [2024-07-15 21:31:54.009528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457d40 (9): Bad file descriptor
00:18:20.728 [2024-07-15 21:31:54.009545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:20.728 [2024-07-15 21:31:54.009553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:20.728 [2024-07-15 21:31:54.009564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:20.728 [2024-07-15 21:31:54.009588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:20.728 [2024-07-15 21:31:54.009597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:20.728 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:18:20.728 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:20.728 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:20.986 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:18:20.987 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:18:20.987 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:20.987 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:21.246 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:18:21.246 21:31:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:18:23.149 [2024-07-15 21:31:56.006499] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:23.149 [2024-07-15 21:31:56.006564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2457d40 with addr=10.0.0.2, port=4420
00:18:23.149 [2024-07-15 21:31:56.006579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2457d40 is same with the state(5) to be set
00:18:23.149 [2024-07-15 21:31:56.006603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457d40 (9): Bad file descriptor
00:18:23.149 [2024-07-15 21:31:56.006619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:23.149 [2024-07-15 21:31:56.006628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:23.149 [2024-07-15 21:31:56.006638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:23.149 [2024-07-15 21:31:56.006659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:23.149 [2024-07-15 21:31:56.006669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:25.052 [2024-07-15 21:31:58.003473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:25.052 [2024-07-15 21:31:58.003550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:25.052 [2024-07-15 21:31:58.003566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:25.053 [2024-07-15 21:31:58.003585] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:18:25.053 [2024-07-15 21:31:58.003618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:25.620
00:18:25.620 Latency(us)
00:18:25.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:25.620 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:25.620 Verification LBA range: start 0x0 length 0x4000
00:18:25.620 NVMe0n1 : 8.11 1405.94 5.49 15.77 0.00 90039.63 3105.72 7061253.96
00:18:25.620 ===================================================================================================================
00:18:25.620 Total : 1405.94 5.49 15.77 0.00 90039.63 3105.72 7061253.96
00:18:25.620 0
00:18:26.187 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:18:26.187 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:26.187 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:26.444 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:26.444 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:18:26.444 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:26.444 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 81449
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81420
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81420 ']'
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81420
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81420
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:18:26.702 killing process with pid 81420
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81420'
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81420
00:18:26.702 Received shutdown signal, test time was about 9.097233 seconds
00:18:26.702
00:18:26.702 Latency(us)
00:18:26.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:26.702 ===================================================================================================================
00:18:26.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:26.702 21:31:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81420
00:18:26.960 21:32:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:26.960 [2024-07-15 21:32:00.316810] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:27.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81565
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81565 /var/tmp/bdevperf.sock
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81565 ']'
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:18:27.217 21:32:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:27.217 [2024-07-15 21:32:00.382794] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization...
00:18:27.217 [2024-07-15 21:32:00.382882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81565 ]
00:18:27.217 [2024-07-15 21:32:00.522316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:27.473 [2024-07-15 21:32:00.608519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:18:27.473 [2024-07-15 21:32:00.649635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:28.037 21:32:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:28.037 21:32:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:18:28.037 21:32:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:18:28.037 21:32:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:28.596 NVMe0n1
00:18:28.596 21:32:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:28.596 21:32:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81583
00:18:28.596 21:32:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:18:28.596 Running I/O for 10 seconds...
00:18:29.524 21:32:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.783 [2024-07-15 21:32:02.918361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.783 [2024-07-15 21:32:02.918409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.783 [2024-07-15 21:32:02.918430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.918553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 
[2024-07-15 21:32:02.918590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.918983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.918993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.919001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.784 [2024-07-15 21:32:02.919020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.784 [2024-07-15 21:32:02.919221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.784 [2024-07-15 21:32:02.919230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 
[2024-07-15 21:32:02.919357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.785 [2024-07-15 21:32:02.919898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:92992 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.919981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.919993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.785 [2024-07-15 21:32:02.920003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.785 [2024-07-15 21:32:02.920011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 
[2024-07-15 21:32:02.920102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:29.786 [2024-07-15 21:32:02.920339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.786 [2024-07-15 21:32:02.920471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23664d0 is same with the state(5) to be set 00:18:29.786 [2024-07-15 21:32:02.920491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93184 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93192 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93200 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93208 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93216 len:8 PRP1 
0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93224 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93232 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93240 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.786 [2024-07-15 21:32:02.920791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93248 len:8 PRP1 0x0 PRP2 0x0 00:18:29.786 [2024-07-15 21:32:02.920799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.786 [2024-07-15 21:32:02.920807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.786 [2024-07-15 21:32:02.920814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93256 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.920846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.920853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93264 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.920877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.920883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93272 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.920907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.920914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93280 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.920937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.920944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93288 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.920969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.920976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 [2024-07-15 21:32:02.920983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93296 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.920991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.921000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:29.787 [2024-07-15 21:32:02.921007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:29.787 21:32:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:18:29.787 [2024-07-15 21:32:02.940518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93304 len:8 PRP1 0x0 PRP2 0x0 00:18:29.787 [2024-07-15 21:32:02.940564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.787 [2024-07-15 21:32:02.940675] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23664d0 was disconnected and freed. reset controller. 
00:18:29.787 [2024-07-15 21:32:02.940866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:29.787 [2024-07-15 21:32:02.940884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:29.787 [2024-07-15 21:32:02.940900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:29.787 [2024-07-15 21:32:02.940911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:29.787 [2024-07-15 21:32:02.940923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:29.787 [2024-07-15 21:32:02.940936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:29.787 [2024-07-15 21:32:02.940948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:29.787 [2024-07-15 21:32:02.940967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:29.787 [2024-07-15 21:32:02.940981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set
00:18:29.787 [2024-07-15 21:32:02.941228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:29.787 [2024-07-15 21:32:02.941259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor
00:18:29.787 [2024-07-15 21:32:02.941359] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:29.787 [2024-07-15 21:32:02.941378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420
00:18:29.787 [2024-07-15 21:32:02.941390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set
00:18:29.787 [2024-07-15 21:32:02.941408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor
00:18:29.787 [2024-07-15 21:32:02.941425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:29.787 [2024-07-15 21:32:02.941436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:29.787 [2024-07-15 21:32:02.941448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:29.787 [2024-07-15 21:32:02.941468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:29.787 [2024-07-15 21:32:02.941480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:30.720 [2024-07-15 21:32:03.939980] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:30.720 [2024-07-15 21:32:03.940042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420
00:18:30.720 [2024-07-15 21:32:03.940056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set
00:18:30.720 [2024-07-15 21:32:03.940079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor
00:18:30.720 [2024-07-15 21:32:03.940094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:30.720 [2024-07-15 21:32:03.940102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:30.720 [2024-07-15 21:32:03.940112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:30.720 [2024-07-15 21:32:03.940132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:30.720 [2024-07-15 21:32:03.940142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:30.720 21:32:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:30.977 [2024-07-15 21:32:04.157434] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:30.977 21:32:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 81583
00:18:31.912 [2024-07-15 21:32:04.956142] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:38.584
00:18:38.584                                        Latency(us)
00:18:38.584 Device Information : runtime(s)    IOPS   MiB/s  Fail/s   TO/s   Average      min        max
00:18:38.585 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:38.585 Verification LBA range: start 0x0 length 0x4000
00:18:38.585 NVMe0n1            :      10.01 7003.29   27.36    0.00   0.00  18252.23  1237.02 3045502.66
00:18:38.585 ===================================================================================================================
00:18:38.585 Total              :            7003.29   27.36    0.00   0.00  18252.23  1237.02 3045502.66
00:18:38.585 0
00:18:38.585 21:32:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81693
00:18:38.585 21:32:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:38.585 21:32:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:38.585 Running I/O for 10 seconds...
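The long runs of "ABORTED - SQ DELETION (00/08)" completions in this log are the NVMe generic status 08h, Command Aborted due to SQ Deletion, reported for I/O that was still queued when the TCP listener went away; once the listener is re-added, the next controller reset succeeds ("Resetting controller successful." above). The listener toggle the test drives can be replayed by hand with the same RPCs that appear in this run. A minimal sketch, assuming an nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 and a connected bdevperf host are already running, and using the path, address and port from this run (the RPC variable is only shorthand introduced here):

  # Sketch only: replays the listener remove/re-add seen in host/timeout.sh.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Drop the listener: in-flight I/O completes with ABORTED - SQ DELETION and the
  # host's reconnect attempts fail with connect() errno = 111.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1

  # Re-add the listener: the next reset/reconnect attempt can complete.
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420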
00:18:39.518 21:32:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.778 [2024-07-15 21:32:13.013180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:39.778 [2024-07-15 21:32:13.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.778 [2024-07-15 21:32:13.013273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.778 [2024-07-15 21:32:13.013294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.778 [2024-07-15 21:32:13.013314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.778 [2024-07-15 21:32:13.013332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.778 [2024-07-15 21:32:13.013350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.778 [2024-07-15 21:32:13.013360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 
21:32:13.013427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.013985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.013994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.779 [2024-07-15 21:32:13.014155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.779 [2024-07-15 21:32:13.014164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014373] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72168 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 
[2024-07-15 21:32:13.014743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.780 [2024-07-15 21:32:13.014830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.780 [2024-07-15 21:32:13.014840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.014986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.014995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 
21:32:13.015497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.781 [2024-07-15 21:32:13.015597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.781 [2024-07-15 21:32:13.015606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2369470 is same with the state(5) to be set 00:18:39.781 [2024-07-15 21:32:13.015617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:39.782 [2024-07-15 21:32:13.015625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:39.782 [2024-07-15 21:32:13.015632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:18:39.782 [2024-07-15 21:32:13.015641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.782 [2024-07-15 21:32:13.015690] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2369470 was disconnected and freed. reset controller. 
00:18:39.782 [2024-07-15 21:32:13.015880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:39.782 [2024-07-15 21:32:13.015954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor 00:18:39.782 [2024-07-15 21:32:13.016041] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:39.782 [2024-07-15 21:32:13.016057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420 00:18:39.782 [2024-07-15 21:32:13.016067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set 00:18:39.782 [2024-07-15 21:32:13.016085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor 00:18:39.782 [2024-07-15 21:32:13.016100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:39.782 [2024-07-15 21:32:13.016108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:39.782 [2024-07-15 21:32:13.016119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:39.782 [2024-07-15 21:32:13.016136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:39.782 [2024-07-15 21:32:13.016146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:39.782 21:32:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:40.738 [2024-07-15 21:32:14.014651] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.738 [2024-07-15 21:32:14.014713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420 00:18:40.738 [2024-07-15 21:32:14.014728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set 00:18:40.738 [2024-07-15 21:32:14.014752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor 00:18:40.738 [2024-07-15 21:32:14.014768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:40.738 [2024-07-15 21:32:14.014777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:40.738 [2024-07-15 21:32:14.014787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:40.738 [2024-07-15 21:32:14.014811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:40.738 [2024-07-15 21:32:14.014830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:41.675 [2024-07-15 21:32:15.013325] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.675 [2024-07-15 21:32:15.013371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420 00:18:41.675 [2024-07-15 21:32:15.013384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set 00:18:41.675 [2024-07-15 21:32:15.013406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor 00:18:41.675 [2024-07-15 21:32:15.013421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:41.675 [2024-07-15 21:32:15.013430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:41.675 [2024-07-15 21:32:15.013440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:41.675 [2024-07-15 21:32:15.013461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.675 [2024-07-15 21:32:15.013471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.051 [2024-07-15 21:32:16.014400] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.051 [2024-07-15 21:32:16.014463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231bd40 with addr=10.0.0.2, port=4420 00:18:43.051 [2024-07-15 21:32:16.014477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231bd40 is same with the state(5) to be set 00:18:43.051 [2024-07-15 21:32:16.014664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231bd40 (9): Bad file descriptor 00:18:43.051 [2024-07-15 21:32:16.014855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.051 [2024-07-15 21:32:16.014865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:43.051 [2024-07-15 21:32:16.014875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.051 [2024-07-15 21:32:16.017620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:43.051 [2024-07-15 21:32:16.017650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.051 21:32:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.051 [2024-07-15 21:32:16.218042] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.051 21:32:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 81693 00:18:43.988 [2024-07-15 21:32:17.045062] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
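The block above is the recovery half of the timeout test: while no listener exists on the target, every reconnect to 10.0.0.2:4420 fails with errno 111 (connection refused) and bdev_nvme schedules another reset, until host/timeout.sh@102 re-adds the listener and the next reset completes ("Resetting controller successful"). A minimal sketch of the driver side of that sequence, using the same rpc.py path and NQN that appear in the trace (the pid variable is illustrative; the log waits on the concrete pid 81693):

  # Re-create the TCP listener so the host's reconnect loop can succeed again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Then wait for the running I/O job to observe the successful controller reset
  wait "$bdevperf_pid"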
00:18:49.350 00:18:49.350 Latency(us) 00:18:49.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.350 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.350 Verification LBA range: start 0x0 length 0x4000 00:18:49.350 NVMe0n1 : 10.01 5545.46 21.66 5143.68 0.00 11956.69 1381.78 3018551.31 00:18:49.350 =================================================================================================================== 00:18:49.350 Total : 5545.46 21.66 5143.68 0.00 11956.69 0.00 3018551.31 00:18:49.350 0 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81565 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81565 ']' 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81565 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81565 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:49.350 killing process with pid 81565 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81565' 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81565 00:18:49.350 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.350 00:18:49.350 Latency(us) 00:18:49.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.350 =================================================================================================================== 00:18:49.350 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.350 21:32:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81565 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81806 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81806 /var/tmp/bdevperf.sock 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 81806 ']' 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.350 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.351 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.351 21:32:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.351 [2024-07-15 21:32:22.184671] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
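After the first run's latency summary, the old bdevperf (pid 81565) is killed and host/timeout.sh@109 starts a fresh one with -z, so it sits idle until a perform_tests request arrives on its private RPC socket /var/tmp/bdevperf.sock. A small sketch of that launch-and-wait pattern as the trace shows it (waitforlisten is the autotest_common.sh helper traced above; the pid variable name is illustrative):

  # Start bdevperf idle (-z) with its own RPC socket, then wait for that socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock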
00:18:49.351 [2024-07-15 21:32:22.184739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81806 ] 00:18:49.351 [2024-07-15 21:32:22.324881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.351 [2024-07-15 21:32:22.423131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.351 [2024-07-15 21:32:22.464719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81806 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81818 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:49.916 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:50.483 NVMe0n1 00:18:50.483 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81865 00:18:50.483 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:50.483 21:32:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.483 Running I/O for 10 seconds... 
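The key step in this setup is host/timeout.sh@120: the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, so when the listener is removed again (host/timeout.sh@126 below) bdev_nvme retries the connection every 2 seconds and gives the controller up after 5 seconds without one. A condensed sketch of the scripted steps the trace walks through, against the same bdevperf socket (option values are copied from the trace; the bpftrace attach at host/timeout.sh@115 is omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdev_nvme options exactly as traced at host/timeout.sh@118
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  # Attach NVMe0; reconnect every 2 s, declare controller loss after 5 s offline
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the queued randread workload in the idle bdevperf instance
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &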
00:18:51.421 21:32:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.421 [2024-07-15 21:32:24.775186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775278] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775399] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775431] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775478] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775558] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775591] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the 
state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775609] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775618] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775626] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775732] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775740] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775837] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.775999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.776007] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.776015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.776024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 21:32:24.776033] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.421 [2024-07-15 
21:32:24.776041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776195] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same 
with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462d0 is same with the state(5) to be set 00:18:51.422 [2024-07-15 21:32:24.776484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.776981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.776997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.422 [2024-07-15 21:32:24.777010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.422 [2024-07-15 21:32:24.777026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.777982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.777996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.778011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.778024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.778039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.778054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.778069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:51.423 [2024-07-15 21:32:24.778083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.423 [2024-07-15 21:32:24.778099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 
21:32:24.778370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.424 [2024-07-15 21:32:24.778918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.424 [2024-07-15 21:32:24.778929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.778949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.778960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.778969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.778979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.778990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 
[2024-07-15 21:32:24.779239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779441] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.425 [2024-07-15 21:32:24.779460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.425 [2024-07-15 21:32:24.779470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779643] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.426 [2024-07-15 21:32:24.779734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1401310 is same with the state(5) to be set 00:18:51.426 [2024-07-15 21:32:24.779756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.426 [2024-07-15 21:32:24.779764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.426 [2024-07-15 21:32:24.779772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18896 len:8 PRP1 0x0 PRP2 0x0 00:18:51.426 [2024-07-15 21:32:24.779781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779849] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1401310 was disconnected and freed. reset controller. 
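The wall of NOTICE lines above is the driver dumping every queued READ that was aborted when I/O submission queue 1 was torn down during the controller reset; every completion carries the same status, ABORTED - SQ DELETION (00/08). When triaging a capture like this one, a quick sanity check is to tally the aborts per queue id and see how they split between the I/O queue (qid 1) and the admin queue (qid 0, whose outstanding async event requests are aborted just below). A minimal sketch, assuming the console output has been saved to a file named bdevperf.log (a hypothetical name, not something this job produces):

    # Tally aborted completions per submission queue id in a saved log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' bdevperf.log \
      | awk -F'qid:' '{count[$2]++} END {for (q in count) printf "qid %s: %d aborts\n", q, count[q]}'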
00:18:51.426 [2024-07-15 21:32:24.779921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.426 [2024-07-15 21:32:24.779934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.426 [2024-07-15 21:32:24.779953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.426 [2024-07-15 21:32:24.779973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.426 [2024-07-15 21:32:24.779990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.426 [2024-07-15 21:32:24.779999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392c00 is same with the state(5) to be set 00:18:51.426 [2024-07-15 21:32:24.780213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:51.426 [2024-07-15 21:32:24.780235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392c00 (9): Bad file descriptor 00:18:51.426 [2024-07-15 21:32:24.780321] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.426 [2024-07-15 21:32:24.780338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1392c00 with addr=10.0.0.2, port=4420 00:18:51.426 [2024-07-15 21:32:24.780347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392c00 is same with the state(5) to be set 00:18:51.426 [2024-07-15 21:32:24.780362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392c00 (9): Bad file descriptor 00:18:51.426 [2024-07-15 21:32:24.780376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:51.426 [2024-07-15 21:32:24.780386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:51.426 [2024-07-15 21:32:24.780396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:51.426 [2024-07-15 21:32:24.780414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
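The reset attempt above dies at the socket layer before any NVMe traffic is exchanged: uring_sock_create reports connect() errno 111 (ECONNREFUSED) against 10.0.0.2:4420, which is what the host sees while nothing is accepting on the target port, so the controller is marked failed and the reset is retried. When poking at this by hand, two quick checks are whether anything is listening on the port inside the target network namespace and whether discovery gets through from the host side; a hedged sketch, assuming the nvmf_tgt_ns_spdk namespace and addressing conventions used elsewhere in this job, nvme-cli installed, and root privileges:

    # Is anything listening on the NVMe/TCP port inside the target namespace?
    ip netns exec nvmf_tgt_ns_spdk ss -ltn 'sport = :4420'
    # Can the host-side initiator reach the discovery service?
    nvme discover -t tcp -a 10.0.0.2 -s 4420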
00:18:51.701 21:32:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 81865 00:18:51.701 [2024-07-15 21:32:24.802619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:53.607 [2024-07-15 21:32:26.799636] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.607 [2024-07-15 21:32:26.799702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1392c00 with addr=10.0.0.2, port=4420 00:18:53.607 [2024-07-15 21:32:26.799717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392c00 is same with the state(5) to be set 00:18:53.607 [2024-07-15 21:32:26.799741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392c00 (9): Bad file descriptor 00:18:53.607 [2024-07-15 21:32:26.799758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:53.607 [2024-07-15 21:32:26.799767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:53.607 [2024-07-15 21:32:26.799779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:53.607 [2024-07-15 21:32:26.799803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:53.607 [2024-07-15 21:32:26.799815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.556 [2024-07-15 21:32:28.796777] uring.c: 587:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.556 [2024-07-15 21:32:28.796852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1392c00 with addr=10.0.0.2, port=4420 00:18:55.556 [2024-07-15 21:32:28.796868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1392c00 is same with the state(5) to be set 00:18:55.556 [2024-07-15 21:32:28.796893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1392c00 (9): Bad file descriptor 00:18:55.556 [2024-07-15 21:32:28.796910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.556 [2024-07-15 21:32:28.796922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.556 [2024-07-15 21:32:28.796933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.556 [2024-07-15 21:32:28.796959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:55.556 [2024-07-15 21:32:28.796971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.453 [2024-07-15 21:32:30.793802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
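The retries logged above at 21:32:24, :26, :28 and :30 land roughly two seconds apart before the controller is left in the failed state, which appears to be the reconnect-delay behaviour this host/timeout.sh case is exercising; the script confirms it just below by dumping trace.txt and counting 'reconnect delay bdev controller NVMe0' lines. An equivalent check against a saved trace file is sketched here, with an extra awk step (my addition, not part of the test) that prints the gap between consecutive delayed reconnects; the unit of the trace timestamps depends on the tracer, so only the relative spacing is meaningful:

    # How many delayed reconnects were recorded?
    grep -c 'reconnect delay bdev controller NVMe0' trace.txt
    # How far apart were they (in trace-timestamp units)?
    grep 'reconnect delay bdev controller NVMe0' trace.txt \
      | awk -F: '{t = $1; if (prev != "") printf "delta: %.3f\n", t - prev; prev = t}'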
00:18:57.453 [2024-07-15 21:32:30.793866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.453 [2024-07-15 21:32:30.793878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.453 [2024-07-15 21:32:30.793888] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:57.453 [2024-07-15 21:32:30.793910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:58.838 00:18:58.838 Latency(us) 00:18:58.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.839 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:58.839 NVMe0n1 : 8.11 2583.06 10.09 15.78 0.00 49382.87 6369.36 7061253.96 00:18:58.839 =================================================================================================================== 00:18:58.839 Total : 2583.06 10.09 15.78 0.00 49382.87 6369.36 7061253.96 00:18:58.839 0 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.839 Attaching 5 probes... 00:18:58.839 1147.499905: reset bdev controller NVMe0 00:18:58.839 1147.561353: reconnect bdev controller NVMe0 00:18:58.839 3166.792714: reconnect delay bdev controller NVMe0 00:18:58.839 3166.817317: reconnect bdev controller NVMe0 00:18:58.839 5163.924886: reconnect delay bdev controller NVMe0 00:18:58.839 5163.954394: reconnect bdev controller NVMe0 00:18:58.839 7161.067558: reconnect delay bdev controller NVMe0 00:18:58.839 7161.092227: reconnect bdev controller NVMe0 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 81818 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81806 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81806 ']' 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81806 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81806 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:58.839 killing process with pid 81806 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81806' 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81806 00:18:58.839 Received shutdown signal, test time was about 8.184282 seconds 00:18:58.839 00:18:58.839 Latency(us) 00:18:58.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.839 =================================================================================================================== 
00:18:58.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.839 21:32:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81806 00:18:58.839 21:32:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.098 rmmod nvme_tcp 00:18:59.098 rmmod nvme_fabrics 00:18:59.098 rmmod nvme_keyring 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81373 ']' 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81373 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 81373 ']' 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 81373 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81373 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81373' 00:18:59.098 killing process with pid 81373 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 81373 00:18:59.098 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 81373 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.364 21:32:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:59.364 ************************************ 
00:18:59.364 END TEST nvmf_timeout 00:18:59.364 ************************************ 00:18:59.364 00:18:59.364 real 0m45.911s 00:18:59.365 user 2m12.679s 00:18:59.365 sys 0m6.707s 00:18:59.365 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.365 21:32:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.624 21:32:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:59.624 21:32:32 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:18:59.624 21:32:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:18:59.624 21:32:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.624 21:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.624 21:32:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:18:59.624 00:18:59.624 real 11m12.676s 00:18:59.624 user 26m26.475s 00:18:59.624 sys 3m23.496s 00:18:59.624 21:32:32 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.624 21:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.624 ************************************ 00:18:59.624 END TEST nvmf_tcp 00:18:59.624 ************************************ 00:18:59.624 21:32:32 -- common/autotest_common.sh@1142 -- # return 0 00:18:59.624 21:32:32 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:18:59.624 21:32:32 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:59.624 21:32:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:59.624 21:32:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.624 21:32:32 -- common/autotest_common.sh@10 -- # set +x 00:18:59.624 ************************************ 00:18:59.624 START TEST nvmf_dif 00:18:59.624 ************************************ 00:18:59.624 21:32:32 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:59.884 * Looking for test storage... 
00:18:59.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.884 21:32:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.884 21:32:33 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.884 21:32:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.884 21:32:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.884 21:32:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.884 21:32:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.884 21:32:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:18:59.884 21:32:33 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:18:59.884 21:32:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.884 21:32:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:59.884 21:32:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.884 21:32:33 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:59.884 Cannot find device "nvmf_tgt_br" 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@155 -- # true 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.884 Cannot find device "nvmf_tgt_br2" 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@156 -- # true 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:59.884 Cannot find device "nvmf_tgt_br" 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@158 -- # true 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:59.884 Cannot find device "nvmf_tgt_br2" 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@159 -- # true 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:59.884 21:32:33 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.148 
21:32:33 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.148 21:32:33 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.406 21:32:33 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.406 21:32:33 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.406 21:32:33 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:00.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:00.406 00:19:00.406 --- 10.0.0.2 ping statistics --- 00:19:00.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.406 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:00.406 21:32:33 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:00.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:19:00.406 00:19:00.406 --- 10.0.0.3 ping statistics --- 00:19:00.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.406 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:00.406 21:32:33 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:19:00.407 00:19:00.407 --- 10.0.0.1 ping statistics --- 00:19:00.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.407 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:00.407 21:32:33 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.407 21:32:33 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:00.407 21:32:33 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:00.407 21:32:33 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:00.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.971 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:00.971 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:00.971 21:32:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:00.971 21:32:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82300 00:19:00.971 
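Everything from the failed "Cannot find device" probes down to the three pings is the nvmf_veth_init helper building the virtual topology that NET_TYPE=virt runs use: a target namespace (nvmf_tgt_ns_spdk) holding 10.0.0.2 and 10.0.0.3 on veth pairs, the initiator side keeping 10.0.0.1, the bridge-facing peers enslaved to nvmf_br, and an iptables rule admitting TCP/4420. A condensed sketch of the same topology, run as root; names and addresses are copied from the log above, the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and error handling is dropped:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, should answer once the links are up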
21:32:34 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82300 00:19:00.971 21:32:34 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 82300 ']' 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.971 21:32:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:00.971 [2024-07-15 21:32:34.226207] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:19:00.971 [2024-07-15 21:32:34.226282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.286 [2024-07-15 21:32:34.368174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.286 [2024-07-15 21:32:34.464754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.286 [2024-07-15 21:32:34.464797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.286 [2024-07-15 21:32:34.464807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.286 [2024-07-15 21:32:34.464815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.286 [2024-07-15 21:32:34.464833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
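nvmfappstart above launches the target inside the namespace as 'ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF' and then parks in waitforlisten until the app (pid 82300 here) answers on /var/tmp/spdk.sock. A rough stand-in for that wait, run as root, assuming rpc.py from this repo and using spdk_get_version as the liveness probe (the real waitforlisten helper in autotest_common.sh also verifies the pid is still alive and has its own timeout handling):

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the RPC socket until the target answers, giving up after ~30 s.
    for _ in $(seq 1 30); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 1
    done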
00:19:01.286 [2024-07-15 21:32:34.464857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.286 [2024-07-15 21:32:34.505537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:01.853 21:32:35 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:01.853 21:32:35 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.853 21:32:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:01.853 21:32:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:01.853 [2024-07-15 21:32:35.145247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.853 21:32:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.853 21:32:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:01.853 ************************************ 00:19:01.853 START TEST fio_dif_1_default 00:19:01.853 ************************************ 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:01.853 bdev_null0 00:19:01.853 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.854 21:32:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:01.854 [2024-07-15 21:32:35.209243] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:01.854 { 00:19:01.854 "params": { 00:19:01.854 "name": "Nvme$subsystem", 00:19:01.854 "trtype": "$TEST_TRANSPORT", 00:19:01.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.854 "adrfam": "ipv4", 00:19:01.854 "trsvcid": "$NVMF_PORT", 00:19:01.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.854 "hdgst": ${hdgst:-false}, 00:19:01.854 "ddgst": ${ddgst:-false} 00:19:01.854 }, 00:19:01.854 "method": "bdev_nvme_attach_controller" 00:19:01.854 } 00:19:01.854 EOF 00:19:01.854 )") 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.854 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:02.112 "params": { 00:19:02.112 "name": "Nvme0", 00:19:02.112 "trtype": "tcp", 00:19:02.112 "traddr": "10.0.0.2", 00:19:02.112 "adrfam": "ipv4", 00:19:02.112 "trsvcid": "4420", 00:19:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:02.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:02.112 "hdgst": false, 00:19:02.112 "ddgst": false 00:19:02.112 }, 00:19:02.112 "method": "bdev_nvme_attach_controller" 00:19:02.112 }' 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:02.112 21:32:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:02.112 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:02.112 fio-3.35 00:19:02.112 Starting 1 thread 00:19:14.305 00:19:14.305 filename0: (groupid=0, jobs=1): err= 0: pid=82368: Mon Jul 15 21:32:45 2024 00:19:14.305 read: IOPS=10.7k, BW=41.8MiB/s (43.8MB/s)(418MiB/10001msec) 00:19:14.305 slat (usec): min=5, max=368, avg= 7.78, stdev= 7.19 00:19:14.305 clat (usec): min=291, max=4411, avg=352.12, stdev=96.14 00:19:14.305 lat (usec): min=297, max=4419, avg=359.90, stdev=99.45 00:19:14.305 clat percentiles (usec): 00:19:14.305 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:19:14.305 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:19:14.305 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 429], 00:19:14.305 | 99.00th=[ 570], 99.50th=[ 906], 99.90th=[ 1680], 99.95th=[ 1893], 00:19:14.305 | 99.99th=[ 2311] 00:19:14.305 bw ( KiB/s): min=31904, max=48160, per=100.00%, avg=43134.32, stdev=4731.27, samples=19 00:19:14.305 iops : min= 7976, max=12040, avg=10783.58, stdev=1182.82, samples=19 00:19:14.305 lat (usec) : 500=98.54%, 750=0.78%, 1000=0.19% 
00:19:14.305 lat (msec) : 2=0.45%, 4=0.02%, 10=0.01% 00:19:14.305 cpu : usr=80.80%, sys=17.20%, ctx=118, majf=0, minf=0 00:19:14.305 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.305 issued rwts: total=107056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.305 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:14.305 00:19:14.305 Run status group 0 (all jobs): 00:19:14.305 READ: bw=41.8MiB/s (43.8MB/s), 41.8MiB/s-41.8MiB/s (43.8MB/s-43.8MB/s), io=418MiB (439MB), run=10001-10001msec 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.305 00:19:14.305 real 0m11.031s 00:19:14.305 user 0m8.735s 00:19:14.305 sys 0m2.012s 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 ************************************ 00:19:14.305 END TEST fio_dif_1_default 00:19:14.305 ************************************ 00:19:14.305 21:32:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:14.305 21:32:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:14.305 21:32:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:14.305 21:32:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 ************************************ 00:19:14.305 START TEST fio_dif_1_multi_subsystems 00:19:14.305 ************************************ 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.305 
21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 bdev_null0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.305 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 [2024-07-15 21:32:46.274445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 bdev_null1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:14.306 { 00:19:14.306 "params": { 00:19:14.306 "name": "Nvme$subsystem", 00:19:14.306 "trtype": "$TEST_TRANSPORT", 00:19:14.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.306 "adrfam": "ipv4", 00:19:14.306 "trsvcid": "$NVMF_PORT", 00:19:14.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.306 "hdgst": ${hdgst:-false}, 00:19:14.306 "ddgst": ${ddgst:-false} 00:19:14.306 }, 00:19:14.306 "method": "bdev_nvme_attach_controller" 00:19:14.306 } 00:19:14.306 EOF 00:19:14.306 )") 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:14.306 21:32:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:14.306 { 00:19:14.306 "params": { 00:19:14.306 "name": "Nvme$subsystem", 00:19:14.306 "trtype": "$TEST_TRANSPORT", 00:19:14.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.306 "adrfam": "ipv4", 00:19:14.306 "trsvcid": "$NVMF_PORT", 00:19:14.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.306 "hdgst": ${hdgst:-false}, 00:19:14.306 "ddgst": ${ddgst:-false} 00:19:14.306 }, 00:19:14.306 "method": "bdev_nvme_attach_controller" 00:19:14.306 } 00:19:14.306 EOF 00:19:14.306 )") 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:14.306 "params": { 00:19:14.306 "name": "Nvme0", 00:19:14.306 "trtype": "tcp", 00:19:14.306 "traddr": "10.0.0.2", 00:19:14.306 "adrfam": "ipv4", 00:19:14.306 "trsvcid": "4420", 00:19:14.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:14.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:14.306 "hdgst": false, 00:19:14.306 "ddgst": false 00:19:14.306 }, 00:19:14.306 "method": "bdev_nvme_attach_controller" 00:19:14.306 },{ 00:19:14.306 "params": { 00:19:14.306 "name": "Nvme1", 00:19:14.306 "trtype": "tcp", 00:19:14.306 "traddr": "10.0.0.2", 00:19:14.306 "adrfam": "ipv4", 00:19:14.306 "trsvcid": "4420", 00:19:14.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.306 "hdgst": false, 00:19:14.306 "ddgst": false 00:19:14.306 }, 00:19:14.306 "method": "bdev_nvme_attach_controller" 00:19:14.306 }' 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:14.306 21:32:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:14.306 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:14.306 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:14.306 fio-3.35 00:19:14.306 Starting 2 threads 00:19:24.287 00:19:24.287 filename0: (groupid=0, jobs=1): err= 0: pid=82521: Mon Jul 15 21:32:57 2024 00:19:24.287 read: IOPS=6328, BW=24.7MiB/s (25.9MB/s)(247MiB/10001msec) 00:19:24.287 slat (nsec): min=5808, max=87603, avg=11325.46, stdev=4067.73 00:19:24.287 clat (usec): min=303, max=1149, avg=601.91, stdev=42.23 00:19:24.287 lat (usec): min=309, max=1158, avg=613.23, stdev=44.50 00:19:24.287 clat percentiles (usec): 00:19:24.287 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 578], 00:19:24.287 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:19:24.287 | 70.00th=[ 619], 80.00th=[ 627], 90.00th=[ 635], 95.00th=[ 652], 00:19:24.287 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 947], 00:19:24.287 | 99.99th=[ 1123] 00:19:24.287 bw ( KiB/s): min=22720, max=25984, per=49.75%, avg=25332.89, stdev=687.87, samples=19 00:19:24.287 iops : min= 5680, max= 
6496, avg=6333.21, stdev=171.97, samples=19 00:19:24.287 lat (usec) : 500=0.14%, 750=98.52%, 1000=1.31% 00:19:24.287 lat (msec) : 2=0.03% 00:19:24.287 cpu : usr=89.54%, sys=9.36%, ctx=74, majf=0, minf=0 00:19:24.287 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.287 issued rwts: total=63288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.287 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:24.287 filename1: (groupid=0, jobs=1): err= 0: pid=82522: Mon Jul 15 21:32:57 2024 00:19:24.287 read: IOPS=6402, BW=25.0MiB/s (26.2MB/s)(250MiB/10001msec) 00:19:24.287 slat (usec): min=5, max=158, avg=10.73, stdev= 3.50 00:19:24.287 clat (usec): min=306, max=2098, avg=596.11, stdev=43.47 00:19:24.287 lat (usec): min=312, max=2257, avg=606.84, stdev=44.52 00:19:24.287 clat percentiles (usec): 00:19:24.287 | 1.00th=[ 343], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:19:24.287 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 594], 60.00th=[ 603], 00:19:24.287 | 70.00th=[ 611], 80.00th=[ 619], 90.00th=[ 627], 95.00th=[ 644], 00:19:24.287 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 758], 99.95th=[ 1037], 00:19:24.287 | 99.99th=[ 1844] 00:19:24.287 bw ( KiB/s): min=24608, max=28160, per=50.36%, avg=25646.16, stdev=675.69, samples=19 00:19:24.287 iops : min= 6152, max= 7040, avg=6411.53, stdev=168.93, samples=19 00:19:24.287 lat (usec) : 500=1.25%, 750=98.64%, 1000=0.05% 00:19:24.287 lat (msec) : 2=0.05%, 4=0.01% 00:19:24.287 cpu : usr=89.61%, sys=9.40%, ctx=10, majf=0, minf=0 00:19:24.287 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.287 issued rwts: total=64032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.287 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:24.287 00:19:24.287 Run status group 0 (all jobs): 00:19:24.287 READ: bw=49.7MiB/s (52.1MB/s), 24.7MiB/s-25.0MiB/s (25.9MB/s-26.2MB/s), io=497MiB (522MB), run=10001-10001msec 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 00:19:24.287 real 0m11.096s 00:19:24.287 user 0m18.619s 00:19:24.287 sys 0m2.173s 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 ************************************ 00:19:24.287 END TEST fio_dif_1_multi_subsystems 00:19:24.287 ************************************ 00:19:24.287 21:32:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:24.287 21:32:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:24.287 21:32:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:24.287 21:32:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 ************************************ 00:19:24.287 START TEST fio_dif_rand_params 00:19:24.287 ************************************ 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:24.287 21:32:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 bdev_null0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:24.287 [2024-07-15 21:32:57.463707] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.287 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:24.287 { 00:19:24.287 "params": { 00:19:24.287 "name": "Nvme$subsystem", 00:19:24.288 "trtype": "$TEST_TRANSPORT", 00:19:24.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.288 "adrfam": "ipv4", 00:19:24.288 "trsvcid": "$NVMF_PORT", 00:19:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.288 "hdgst": ${hdgst:-false}, 00:19:24.288 "ddgst": ${ddgst:-false} 00:19:24.288 }, 00:19:24.288 "method": "bdev_nvme_attach_controller" 00:19:24.288 } 00:19:24.288 EOF 00:19:24.288 )") 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
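The JSON printed just below is what gen_nvmf_target_json feeds to fio's spdk_bdev engine via --spdk_json_conf /dev/fd/62. On the target side, the setup traced above for this run amounts to a short RPC sequence; a hedged sketch of the equivalent standalone scripts/rpc.py calls (the harness issues the same RPCs through its rpc_cmd wrapper, and the socket path is assumed to be the default /var/tmp/spdk.sock):

    # Standalone equivalent of the rpc_cmd calls traced above (illustrative sketch).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

fio itself needs no further knowledge of the target: the spdk_bdev engine attaches over NVMe/TCP using only the bdev_nvme_attach_controller parameters carried in that JSON.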
00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:24.288 "params": { 00:19:24.288 "name": "Nvme0", 00:19:24.288 "trtype": "tcp", 00:19:24.288 "traddr": "10.0.0.2", 00:19:24.288 "adrfam": "ipv4", 00:19:24.288 "trsvcid": "4420", 00:19:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:24.288 "hdgst": false, 00:19:24.288 "ddgst": false 00:19:24.288 }, 00:19:24.288 "method": "bdev_nvme_attach_controller" 00:19:24.288 }' 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:24.288 21:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:24.546 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:24.546 ... 
00:19:24.546 fio-3.35 00:19:24.546 Starting 3 threads 00:19:31.117 00:19:31.117 filename0: (groupid=0, jobs=1): err= 0: pid=82683: Mon Jul 15 21:33:03 2024 00:19:31.117 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5006msec) 00:19:31.117 slat (nsec): min=5897, max=73221, avg=30331.33, stdev=16527.40 00:19:31.117 clat (usec): min=8058, max=11119, avg=10002.26, stdev=225.72 00:19:31.117 lat (usec): min=8065, max=11155, avg=10032.59, stdev=230.31 00:19:31.117 clat percentiles (usec): 00:19:31.117 | 1.00th=[ 9765], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765], 00:19:31.117 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:19:31.117 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:19:31.117 | 99.00th=[10552], 99.50th=[11076], 99.90th=[11076], 99.95th=[11076], 00:19:31.117 | 99.99th=[11076] 00:19:31.117 bw ( KiB/s): min=36864, max=39168, per=33.35%, avg=38144.00, stdev=768.00, samples=9 00:19:31.117 iops : min= 288, max= 306, avg=298.00, stdev= 6.00, samples=9 00:19:31.117 lat (msec) : 10=53.66%, 20=46.34% 00:19:31.117 cpu : usr=95.38%, sys=4.08%, ctx=46, majf=0, minf=0 00:19:31.117 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.117 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.117 filename0: (groupid=0, jobs=1): err= 0: pid=82684: Mon Jul 15 21:33:03 2024 00:19:31.117 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5006msec) 00:19:31.117 slat (nsec): min=6448, max=73293, avg=29916.80, stdev=15734.98 00:19:31.117 clat (usec): min=8065, max=11092, avg=10001.79, stdev=224.59 00:19:31.117 lat (usec): min=8071, max=11127, avg=10031.71, stdev=228.92 00:19:31.117 clat percentiles (usec): 00:19:31.117 | 1.00th=[ 9765], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9765], 00:19:31.117 | 30.00th=[ 9896], 40.00th=[ 9896], 50.00th=[ 9896], 60.00th=[10028], 00:19:31.117 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:19:31.117 | 99.00th=[10552], 99.50th=[10945], 99.90th=[11076], 99.95th=[11076], 00:19:31.117 | 99.99th=[11076] 00:19:31.117 bw ( KiB/s): min=36864, max=39168, per=33.35%, avg=38144.00, stdev=768.00, samples=9 00:19:31.117 iops : min= 288, max= 306, avg=298.00, stdev= 6.00, samples=9 00:19:31.117 lat (msec) : 10=54.53%, 20=45.47% 00:19:31.117 cpu : usr=95.42%, sys=3.78%, ctx=11, majf=0, minf=9 00:19:31.117 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.117 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.117 filename0: (groupid=0, jobs=1): err= 0: pid=82685: Mon Jul 15 21:33:03 2024 00:19:31.117 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(186MiB/5001msec) 00:19:31.118 slat (nsec): min=5850, max=78677, avg=24911.27, stdev=16570.11 00:19:31.118 clat (usec): min=6738, max=11132, avg=10006.23, stdev=268.03 00:19:31.118 lat (usec): min=6748, max=11165, avg=10031.14, stdev=275.24 00:19:31.118 clat percentiles (usec): 00:19:31.118 | 1.00th=[ 9765], 5.00th=[ 9765], 10.00th=[ 9765], 20.00th=[ 9896], 00:19:31.118 | 30.00th=[ 9896], 40.00th=[ 9896], 
50.00th=[10028], 60.00th=[10028], 00:19:31.118 | 70.00th=[10159], 80.00th=[10159], 90.00th=[10290], 95.00th=[10290], 00:19:31.118 | 99.00th=[10552], 99.50th=[10945], 99.90th=[11076], 99.95th=[11076], 00:19:31.118 | 99.99th=[11076] 00:19:31.118 bw ( KiB/s): min=36864, max=39168, per=33.43%, avg=38229.33, stdev=746.36, samples=9 00:19:31.118 iops : min= 288, max= 306, avg=298.67, stdev= 5.83, samples=9 00:19:31.118 lat (msec) : 10=55.13%, 20=44.87% 00:19:31.118 cpu : usr=92.32%, sys=7.16%, ctx=7, majf=0, minf=0 00:19:31.118 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.118 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.118 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:31.118 00:19:31.118 Run status group 0 (all jobs): 00:19:31.118 READ: bw=112MiB/s (117MB/s), 37.2MiB/s-37.3MiB/s (39.0MB/s-39.1MB/s), io=559MiB (586MB), run=5001-5006msec 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:31.118 21:33:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 bdev_null0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 [2024-07-15 21:33:03.526425] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 bdev_null1 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 bdev_null2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.118 { 00:19:31.118 "params": { 00:19:31.118 "name": "Nvme$subsystem", 00:19:31.118 "trtype": "$TEST_TRANSPORT", 00:19:31.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.118 "adrfam": "ipv4", 00:19:31.118 "trsvcid": "$NVMF_PORT", 00:19:31.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:31.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.118 "hdgst": ${hdgst:-false}, 00:19:31.118 "ddgst": ${ddgst:-false} 00:19:31.118 }, 00:19:31.118 "method": "bdev_nvme_attach_controller" 00:19:31.118 } 00:19:31.118 EOF 00:19:31.118 )") 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.118 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.119 { 00:19:31.119 "params": { 00:19:31.119 "name": "Nvme$subsystem", 00:19:31.119 "trtype": "$TEST_TRANSPORT", 00:19:31.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.119 "adrfam": "ipv4", 00:19:31.119 "trsvcid": "$NVMF_PORT", 00:19:31.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.119 "hdgst": ${hdgst:-false}, 00:19:31.119 "ddgst": ${ddgst:-false} 00:19:31.119 }, 00:19:31.119 "method": "bdev_nvme_attach_controller" 00:19:31.119 } 00:19:31.119 EOF 00:19:31.119 )") 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:31.119 21:33:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:31.119 { 00:19:31.119 "params": { 00:19:31.119 "name": "Nvme$subsystem", 00:19:31.119 "trtype": "$TEST_TRANSPORT", 00:19:31.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.119 "adrfam": "ipv4", 00:19:31.119 "trsvcid": "$NVMF_PORT", 00:19:31.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.119 "hdgst": ${hdgst:-false}, 00:19:31.119 "ddgst": ${ddgst:-false} 00:19:31.119 }, 00:19:31.119 "method": "bdev_nvme_attach_controller" 00:19:31.119 } 00:19:31.119 EOF 00:19:31.119 )") 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:31.119 "params": { 00:19:31.119 "name": "Nvme0", 00:19:31.119 "trtype": "tcp", 00:19:31.119 "traddr": "10.0.0.2", 00:19:31.119 "adrfam": "ipv4", 00:19:31.119 "trsvcid": "4420", 00:19:31.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:31.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:31.119 "hdgst": false, 00:19:31.119 "ddgst": false 00:19:31.119 }, 00:19:31.119 "method": "bdev_nvme_attach_controller" 00:19:31.119 },{ 00:19:31.119 "params": { 00:19:31.119 "name": "Nvme1", 00:19:31.119 "trtype": "tcp", 00:19:31.119 "traddr": "10.0.0.2", 00:19:31.119 "adrfam": "ipv4", 00:19:31.119 "trsvcid": "4420", 00:19:31.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.119 "hdgst": false, 00:19:31.119 "ddgst": false 00:19:31.119 }, 00:19:31.119 "method": "bdev_nvme_attach_controller" 00:19:31.119 },{ 00:19:31.119 "params": { 00:19:31.119 "name": "Nvme2", 00:19:31.119 "trtype": "tcp", 00:19:31.119 "traddr": "10.0.0.2", 00:19:31.119 "adrfam": "ipv4", 00:19:31.119 "trsvcid": "4420", 00:19:31.119 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:31.119 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:31.119 "hdgst": false, 00:19:31.119 "ddgst": false 00:19:31.119 }, 00:19:31.119 "method": "bdev_nvme_attach_controller" 00:19:31.119 }' 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:31.119 21:33:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:31.119 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:31.119 ... 00:19:31.119 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:31.119 ... 00:19:31.119 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:31.119 ... 00:19:31.119 fio-3.35 00:19:31.119 Starting 24 threads 00:19:43.334 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82780: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=272, BW=1092KiB/s (1118kB/s)(10.7MiB/10058msec) 00:19:43.334 slat (usec): min=6, max=3986, avg=21.52, stdev=93.53 00:19:43.334 clat (usec): min=1638, max=126522, avg=58418.31, stdev=18113.69 00:19:43.334 lat (usec): min=1657, max=126531, avg=58439.83, stdev=18113.55 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 46], 00:19:43.334 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:19:43.334 | 70.00th=[ 66], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 88], 00:19:43.334 | 99.00th=[ 100], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 122], 00:19:43.334 | 99.99th=[ 127] 00:19:43.334 bw ( KiB/s): min= 968, max= 1784, per=4.22%, avg=1093.10, stdev=174.12, samples=20 00:19:43.334 iops : min= 242, max= 446, avg=273.25, stdev=43.53, samples=20 00:19:43.334 lat (msec) : 2=0.58%, 10=2.26%, 20=0.66%, 50=23.28%, 100=72.31% 00:19:43.334 lat (msec) : 250=0.91% 00:19:43.334 cpu : usr=42.76%, sys=1.96%, ctx=1565, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=88.6%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 issued rwts: total=2745,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82781: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.3MiB/10034msec) 00:19:43.334 slat (usec): min=6, max=8026, avg=21.63, stdev=208.49 00:19:43.334 clat (msec): min=12, max=123, avg=60.98, stdev=16.49 00:19:43.334 lat (msec): min=12, max=123, avg=61.00, stdev=16.49 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:19:43.334 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 63], 00:19:43.334 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:19:43.334 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 111], 99.95th=[ 122], 00:19:43.334 | 99.99th=[ 125] 00:19:43.334 bw ( KiB/s): min= 784, max= 1280, per=4.03%, avg=1044.40, stdev=101.71, samples=20 00:19:43.334 iops : min= 196, max= 320, avg=261.10, stdev=25.43, samples=20 00:19:43.334 lat (msec) : 20=0.61%, 50=25.43%, 100=72.14%, 250=1.83% 00:19:43.334 cpu : usr=35.00%, sys=2.36%, ctx=1146, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.8%, 4=3.5%, 8=79.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 
issued rwts: total=2627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82782: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=284, BW=1137KiB/s (1164kB/s)(11.1MiB/10001msec) 00:19:43.334 slat (usec): min=3, max=8096, avg=25.03, stdev=239.14 00:19:43.334 clat (usec): min=1542, max=139865, avg=56199.38, stdev=16629.28 00:19:43.334 lat (usec): min=1549, max=139876, avg=56224.41, stdev=16629.91 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 42], 00:19:43.334 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 61], 00:19:43.334 | 70.00th=[ 64], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 87], 00:19:43.334 | 99.00th=[ 104], 99.50th=[ 116], 99.90th=[ 116], 99.95th=[ 140], 00:19:43.334 | 99.99th=[ 140] 00:19:43.334 bw ( KiB/s): min= 1016, max= 1232, per=4.34%, avg=1125.47, stdev=74.10, samples=19 00:19:43.334 iops : min= 254, max= 308, avg=281.37, stdev=18.52, samples=19 00:19:43.334 lat (msec) : 2=0.21%, 4=0.25%, 10=0.21%, 50=37.97%, 100=60.10% 00:19:43.334 lat (msec) : 250=1.27% 00:19:43.334 cpu : usr=39.03%, sys=2.07%, ctx=1104, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 issued rwts: total=2842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82783: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=270, BW=1082KiB/s (1108kB/s)(10.6MiB/10030msec) 00:19:43.334 slat (usec): min=6, max=8036, avg=29.89, stdev=259.32 00:19:43.334 clat (msec): min=12, max=129, avg=58.96, stdev=15.41 00:19:43.334 lat (msec): min=12, max=129, avg=58.99, stdev=15.41 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 46], 00:19:43.334 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 62], 00:19:43.334 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 87], 00:19:43.334 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 118], 99.95th=[ 129], 00:19:43.334 | 99.99th=[ 130] 00:19:43.334 bw ( KiB/s): min= 992, max= 1264, per=4.17%, avg=1081.60, stdev=74.71, samples=20 00:19:43.334 iops : min= 248, max= 316, avg=270.40, stdev=18.68, samples=20 00:19:43.334 lat (msec) : 20=0.59%, 50=27.72%, 100=71.07%, 250=0.63% 00:19:43.334 cpu : usr=40.14%, sys=2.03%, ctx=1162, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 issued rwts: total=2713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82784: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.0MiB/10004msec) 00:19:43.334 slat (nsec): min=3017, max=60050, avg=16517.08, stdev=7577.17 00:19:43.334 clat (msec): min=3, max=138, avg=56.57, stdev=16.35 00:19:43.334 lat (msec): min=3, max=138, avg=56.58, stdev=16.35 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 
00:19:43.334 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:19:43.334 | 70.00th=[ 63], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 87], 00:19:43.334 | 99.00th=[ 104], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 140], 00:19:43.334 | 99.99th=[ 140] 00:19:43.334 bw ( KiB/s): min= 929, max= 1272, per=4.32%, avg=1120.05, stdev=75.33, samples=19 00:19:43.334 iops : min= 232, max= 318, avg=280.00, stdev=18.87, samples=19 00:19:43.334 lat (msec) : 4=0.21%, 20=0.11%, 50=38.89%, 100=59.41%, 250=1.38% 00:19:43.334 cpu : usr=32.05%, sys=1.88%, ctx=933, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 issued rwts: total=2826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82785: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10031msec) 00:19:43.334 slat (usec): min=3, max=8027, avg=24.40, stdev=264.14 00:19:43.334 clat (msec): min=23, max=106, avg=58.02, stdev=15.31 00:19:43.334 lat (msec): min=23, max=106, avg=58.04, stdev=15.31 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:19:43.334 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.334 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 88], 00:19:43.334 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:19:43.334 | 99.99th=[ 107] 00:19:43.334 bw ( KiB/s): min= 992, max= 1320, per=4.23%, avg=1097.20, stdev=79.34, samples=20 00:19:43.334 iops : min= 248, max= 330, avg=274.30, stdev=19.83, samples=20 00:19:43.334 lat (msec) : 50=33.09%, 100=66.26%, 250=0.65% 00:19:43.334 cpu : usr=34.73%, sys=1.99%, ctx=1159, majf=0, minf=9 00:19:43.334 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:43.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.334 issued rwts: total=2759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.334 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.334 filename0: (groupid=0, jobs=1): err= 0: pid=82786: Mon Jul 15 21:33:14 2024 00:19:43.334 read: IOPS=282, BW=1131KiB/s (1159kB/s)(11.1MiB/10005msec) 00:19:43.334 slat (usec): min=3, max=8050, avg=25.99, stdev=238.71 00:19:43.334 clat (msec): min=4, max=135, avg=56.45, stdev=16.69 00:19:43.334 lat (msec): min=4, max=135, avg=56.47, stdev=16.69 00:19:43.334 clat percentiles (msec): 00:19:43.334 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:19:43.334 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:19:43.334 | 70.00th=[ 63], 80.00th=[ 68], 90.00th=[ 79], 95.00th=[ 89], 00:19:43.334 | 99.00th=[ 103], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 136], 00:19:43.334 | 99.99th=[ 136] 00:19:43.334 bw ( KiB/s): min= 992, max= 1224, per=4.32%, avg=1121.68, stdev=72.47, samples=19 00:19:43.334 iops : min= 248, max= 306, avg=280.42, stdev=18.12, samples=19 00:19:43.334 lat (msec) : 10=0.21%, 20=0.11%, 50=38.09%, 100=60.14%, 250=1.45% 00:19:43.334 cpu : usr=41.22%, sys=1.88%, ctx=1282, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:43.335 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename0: (groupid=0, jobs=1): err= 0: pid=82787: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=268, BW=1072KiB/s (1098kB/s)(10.5MiB/10010msec) 00:19:43.335 slat (usec): min=3, max=8069, avg=35.77, stdev=389.29 00:19:43.335 clat (msec): min=25, max=118, avg=59.52, stdev=16.18 00:19:43.335 lat (msec): min=25, max=118, avg=59.55, stdev=16.19 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 00:19:43.335 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:19:43.335 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 92], 00:19:43.335 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 120], 00:19:43.335 | 99.99th=[ 120] 00:19:43.335 bw ( KiB/s): min= 878, max= 1184, per=4.11%, avg=1065.42, stdev=82.63, samples=19 00:19:43.335 iops : min= 219, max= 296, avg=266.32, stdev=20.72, samples=19 00:19:43.335 lat (msec) : 50=29.48%, 100=68.39%, 250=2.12% 00:19:43.335 cpu : usr=33.47%, sys=2.05%, ctx=953, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82788: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10009msec) 00:19:43.335 slat (usec): min=3, max=7119, avg=30.49, stdev=221.67 00:19:43.335 clat (msec): min=23, max=120, avg=59.05, stdev=15.74 00:19:43.335 lat (msec): min=23, max=120, avg=59.08, stdev=15.73 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 44], 00:19:43.335 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:19:43.335 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 88], 00:19:43.335 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 112], 99.95th=[ 121], 00:19:43.335 | 99.99th=[ 121] 00:19:43.335 bw ( KiB/s): min= 881, max= 1208, per=4.15%, avg=1076.53, stdev=104.75, samples=19 00:19:43.335 iops : min= 220, max= 302, avg=269.11, stdev=26.21, samples=19 00:19:43.335 lat (msec) : 50=29.77%, 100=69.27%, 250=0.96% 00:19:43.335 cpu : usr=45.39%, sys=1.42%, ctx=1484, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82789: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=254, BW=1019KiB/s (1044kB/s)(10.0MiB/10045msec) 00:19:43.335 slat (usec): min=4, max=7037, avg=19.02, stdev=160.26 00:19:43.335 clat (msec): min=5, max=108, avg=62.64, stdev=17.24 00:19:43.335 lat (msec): min=5, max=108, avg=62.66, stdev=17.25 00:19:43.335 clat percentiles 
(msec): 00:19:43.335 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 51], 00:19:43.335 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:19:43.335 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 90], 00:19:43.335 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 109], 00:19:43.335 | 99.99th=[ 109] 00:19:43.335 bw ( KiB/s): min= 890, max= 1648, per=3.92%, avg=1017.05, stdev=170.10, samples=20 00:19:43.335 iops : min= 222, max= 412, avg=254.20, stdev=42.54, samples=20 00:19:43.335 lat (msec) : 10=1.88%, 20=0.55%, 50=18.09%, 100=77.58%, 250=1.91% 00:19:43.335 cpu : usr=34.50%, sys=1.99%, ctx=1164, majf=0, minf=0 00:19:43.335 IO depths : 1=0.1%, 2=3.0%, 4=12.1%, 8=69.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=91.0%, 8=6.3%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82790: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=268, BW=1074KiB/s (1100kB/s)(10.5MiB/10026msec) 00:19:43.335 slat (usec): min=5, max=5031, avg=23.99, stdev=181.52 00:19:43.335 clat (msec): min=26, max=109, avg=59.44, stdev=15.04 00:19:43.335 lat (msec): min=26, max=109, avg=59.46, stdev=15.03 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:19:43.335 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:19:43.335 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 88], 00:19:43.335 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 109], 00:19:43.335 | 99.99th=[ 110] 00:19:43.335 bw ( KiB/s): min= 892, max= 1192, per=4.13%, avg=1071.80, stdev=89.92, samples=20 00:19:43.335 iops : min= 223, max= 298, avg=267.95, stdev=22.48, samples=20 00:19:43.335 lat (msec) : 50=27.01%, 100=72.25%, 250=0.74% 00:19:43.335 cpu : usr=42.27%, sys=2.68%, ctx=1370, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82791: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=272, BW=1091KiB/s (1117kB/s)(10.7MiB/10027msec) 00:19:43.335 slat (usec): min=6, max=6987, avg=22.65, stdev=188.35 00:19:43.335 clat (msec): min=12, max=107, avg=58.50, stdev=15.33 00:19:43.335 lat (msec): min=12, max=107, avg=58.52, stdev=15.34 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 00:19:43.335 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:19:43.335 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 87], 00:19:43.335 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 105], 99.95th=[ 108], 00:19:43.335 | 99.99th=[ 108] 00:19:43.335 bw ( KiB/s): min= 912, max= 1264, per=4.20%, avg=1090.00, stdev=88.81, samples=20 00:19:43.335 iops : min= 228, max= 316, avg=272.50, stdev=22.20, samples=20 00:19:43.335 lat (msec) : 20=0.59%, 50=29.58%, 100=68.92%, 250=0.91% 00:19:43.335 cpu : usr=41.29%, sys=2.30%, ctx=1329, majf=0, minf=9 00:19:43.335 IO depths : 
1=0.1%, 2=0.9%, 4=3.5%, 8=79.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82792: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10005msec) 00:19:43.335 slat (usec): min=6, max=8054, avg=40.10, stdev=425.05 00:19:43.335 clat (msec): min=10, max=128, avg=57.79, stdev=16.33 00:19:43.335 lat (msec): min=10, max=128, avg=57.83, stdev=16.33 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 45], 00:19:43.335 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.335 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 90], 00:19:43.335 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 120], 99.95th=[ 120], 00:19:43.335 | 99.99th=[ 129] 00:19:43.335 bw ( KiB/s): min= 896, max= 1232, per=4.22%, avg=1093.89, stdev=84.11, samples=19 00:19:43.335 iops : min= 224, max= 308, avg=273.47, stdev=21.03, samples=19 00:19:43.335 lat (msec) : 20=0.11%, 50=34.63%, 100=63.60%, 250=1.67% 00:19:43.335 cpu : usr=31.20%, sys=1.97%, ctx=884, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82793: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=260, BW=1042KiB/s (1067kB/s)(10.2MiB/10031msec) 00:19:43.335 slat (usec): min=3, max=13030, avg=37.20, stdev=464.68 00:19:43.335 clat (msec): min=25, max=127, avg=61.23, stdev=15.13 00:19:43.335 lat (msec): min=25, max=127, avg=61.27, stdev=15.14 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:19:43.335 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:19:43.335 | 70.00th=[ 68], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 90], 00:19:43.335 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 115], 99.95th=[ 120], 00:19:43.335 | 99.99th=[ 128] 00:19:43.335 bw ( KiB/s): min= 912, max= 1138, per=4.01%, avg=1040.90, stdev=63.89, samples=20 00:19:43.335 iops : min= 228, max= 284, avg=260.20, stdev=15.93, samples=20 00:19:43.335 lat (msec) : 50=24.16%, 100=74.92%, 250=0.92% 00:19:43.335 cpu : usr=31.53%, sys=1.68%, ctx=885, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=78.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 complete : 0=0.0%, 4=88.8%, 8=10.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.335 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.335 filename1: (groupid=0, jobs=1): err= 0: pid=82794: Mon Jul 15 21:33:14 2024 00:19:43.335 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10041msec) 00:19:43.335 slat (usec): min=3, max=8023, avg=23.22, stdev=207.00 00:19:43.335 clat (msec): min=7, max=128, avg=57.89, stdev=16.39 00:19:43.335 
lat (msec): min=7, max=128, avg=57.91, stdev=16.39 00:19:43.335 clat percentiles (msec): 00:19:43.335 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 45], 00:19:43.335 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.335 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 87], 00:19:43.335 | 99.00th=[ 101], 99.50th=[ 104], 99.90th=[ 105], 99.95th=[ 105], 00:19:43.335 | 99.99th=[ 129] 00:19:43.335 bw ( KiB/s): min= 912, max= 1477, per=4.24%, avg=1100.25, stdev=125.58, samples=20 00:19:43.335 iops : min= 228, max= 369, avg=275.05, stdev=31.35, samples=20 00:19:43.335 lat (msec) : 10=1.16%, 20=0.51%, 50=30.36%, 100=67.26%, 250=0.72% 00:19:43.335 cpu : usr=40.04%, sys=2.08%, ctx=1212, majf=0, minf=9 00:19:43.335 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.1%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:43.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename1: (groupid=0, jobs=1): err= 0: pid=82795: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=273, BW=1094KiB/s (1120kB/s)(10.7MiB/10006msec) 00:19:43.336 slat (usec): min=3, max=8042, avg=24.15, stdev=229.95 00:19:43.336 clat (msec): min=25, max=138, avg=58.38, stdev=16.46 00:19:43.336 lat (msec): min=25, max=138, avg=58.40, stdev=16.46 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:19:43.336 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 91], 00:19:43.336 | 99.00th=[ 107], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 140], 00:19:43.336 | 99.99th=[ 140] 00:19:43.336 bw ( KiB/s): min= 880, max= 1256, per=4.18%, avg=1084.89, stdev=101.77, samples=19 00:19:43.336 iops : min= 220, max= 314, avg=271.21, stdev=25.44, samples=19 00:19:43.336 lat (msec) : 50=32.66%, 100=65.84%, 250=1.50% 00:19:43.336 cpu : usr=32.05%, sys=2.23%, ctx=976, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=81.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82796: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=275, BW=1103KiB/s (1130kB/s)(10.8MiB/10011msec) 00:19:43.336 slat (usec): min=3, max=7039, avg=28.42, stdev=244.55 00:19:43.336 clat (msec): min=10, max=107, avg=57.90, stdev=15.47 00:19:43.336 lat (msec): min=10, max=107, avg=57.92, stdev=15.48 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 45], 00:19:43.336 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 80], 95.00th=[ 89], 00:19:43.336 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 108], 99.95th=[ 108], 00:19:43.336 | 99.99th=[ 108] 00:19:43.336 bw ( KiB/s): min= 896, max= 1248, per=4.22%, avg=1094.16, stdev=103.01, samples=19 00:19:43.336 iops : min= 224, max= 312, avg=273.53, stdev=25.75, samples=19 00:19:43.336 lat (msec) : 20=0.11%, 50=34.41%, 100=65.09%, 250=0.40% 00:19:43.336 
cpu : usr=40.97%, sys=1.75%, ctx=1393, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82797: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=276, BW=1105KiB/s (1132kB/s)(10.8MiB/10013msec) 00:19:43.336 slat (usec): min=4, max=8020, avg=20.83, stdev=182.47 00:19:43.336 clat (msec): min=23, max=128, avg=57.79, stdev=15.86 00:19:43.336 lat (msec): min=23, max=128, avg=57.81, stdev=15.87 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:19:43.336 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 89], 00:19:43.336 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 106], 99.95th=[ 129], 00:19:43.336 | 99.99th=[ 129] 00:19:43.336 bw ( KiB/s): min= 960, max= 1256, per=4.25%, avg=1103.05, stdev=85.74, samples=20 00:19:43.336 iops : min= 240, max= 314, avg=275.75, stdev=21.42, samples=20 00:19:43.336 lat (msec) : 50=32.63%, 100=66.39%, 250=0.98% 00:19:43.336 cpu : usr=39.86%, sys=2.41%, ctx=1273, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.0%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82798: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=274, BW=1098KiB/s (1125kB/s)(10.8MiB/10042msec) 00:19:43.336 slat (usec): min=6, max=8032, avg=29.90, stdev=305.44 00:19:43.336 clat (msec): min=5, max=108, avg=58.10, stdev=17.00 00:19:43.336 lat (msec): min=5, max=108, avg=58.13, stdev=17.00 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 47], 00:19:43.336 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:19:43.336 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 88], 00:19:43.336 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 109], 99.95th=[ 109], 00:19:43.336 | 99.99th=[ 109] 00:19:43.336 bw ( KiB/s): min= 960, max= 1696, per=4.22%, avg=1095.80, stdev=160.89, samples=20 00:19:43.336 iops : min= 240, max= 424, avg=273.90, stdev=40.21, samples=20 00:19:43.336 lat (msec) : 10=1.74%, 20=0.58%, 50=27.38%, 100=69.21%, 250=1.09% 00:19:43.336 cpu : usr=37.67%, sys=2.20%, ctx=1163, majf=0, minf=0 00:19:43.336 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.6%, 16=16.9%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82799: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=260, BW=1041KiB/s (1066kB/s)(10.2MiB/10002msec) 00:19:43.336 slat (usec): min=4, max=8024, avg=19.74, 
stdev=157.15 00:19:43.336 clat (msec): min=3, max=132, avg=61.39, stdev=16.46 00:19:43.336 lat (msec): min=3, max=132, avg=61.41, stdev=16.46 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 47], 00:19:43.336 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:19:43.336 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 92], 00:19:43.336 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 133], 00:19:43.336 | 99.99th=[ 133] 00:19:43.336 bw ( KiB/s): min= 769, max= 1200, per=3.97%, avg=1029.11, stdev=109.73, samples=19 00:19:43.336 iops : min= 192, max= 300, avg=257.26, stdev=27.46, samples=19 00:19:43.336 lat (msec) : 4=0.23%, 10=0.12%, 50=26.47%, 100=70.88%, 250=2.31% 00:19:43.336 cpu : usr=31.60%, sys=1.60%, ctx=884, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82800: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10006msec) 00:19:43.336 slat (usec): min=3, max=8024, avg=22.34, stdev=188.62 00:19:43.336 clat (msec): min=10, max=108, avg=58.90, stdev=15.37 00:19:43.336 lat (msec): min=10, max=108, avg=58.92, stdev=15.37 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:19:43.336 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 66], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 89], 00:19:43.336 | 99.00th=[ 97], 99.50th=[ 104], 99.90th=[ 109], 99.95th=[ 109], 00:19:43.336 | 99.99th=[ 109] 00:19:43.336 bw ( KiB/s): min= 897, max= 1216, per=4.15%, avg=1076.53, stdev=78.46, samples=19 00:19:43.336 iops : min= 224, max= 304, avg=269.11, stdev=19.65, samples=19 00:19:43.336 lat (msec) : 20=0.26%, 50=31.28%, 100=67.76%, 250=0.70% 00:19:43.336 cpu : usr=34.86%, sys=2.13%, ctx=956, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82801: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10031msec) 00:19:43.336 slat (usec): min=3, max=11030, avg=27.48, stdev=312.84 00:19:43.336 clat (msec): min=25, max=114, avg=58.82, stdev=15.14 00:19:43.336 lat (msec): min=25, max=114, avg=58.84, stdev=15.14 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 47], 00:19:43.336 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 89], 00:19:43.336 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 111], 99.95th=[ 111], 00:19:43.336 | 99.99th=[ 115] 00:19:43.336 bw ( KiB/s): min= 968, max= 1224, per=4.17%, avg=1081.70, stdev=71.59, samples=20 00:19:43.336 iops : min= 242, max= 306, avg=270.40, 
stdev=17.92, samples=20 00:19:43.336 lat (msec) : 50=31.21%, 100=68.31%, 250=0.48% 00:19:43.336 cpu : usr=33.79%, sys=2.20%, ctx=1098, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82802: Mon Jul 15 21:33:14 2024 00:19:43.336 read: IOPS=269, BW=1079KiB/s (1105kB/s)(10.5MiB/10008msec) 00:19:43.336 slat (usec): min=3, max=8048, avg=30.25, stdev=318.15 00:19:43.336 clat (msec): min=10, max=125, avg=59.15, stdev=17.32 00:19:43.336 lat (msec): min=10, max=125, avg=59.18, stdev=17.31 00:19:43.336 clat percentiles (msec): 00:19:43.336 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 44], 00:19:43.336 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:19:43.336 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 95], 00:19:43.336 | 99.00th=[ 106], 99.50th=[ 126], 99.90th=[ 127], 99.95th=[ 127], 00:19:43.336 | 99.99th=[ 127] 00:19:43.336 bw ( KiB/s): min= 672, max= 1224, per=4.12%, avg=1067.63, stdev=144.59, samples=19 00:19:43.336 iops : min= 168, max= 306, avg=266.89, stdev=36.14, samples=19 00:19:43.336 lat (msec) : 20=0.11%, 50=32.60%, 100=65.02%, 250=2.26% 00:19:43.336 cpu : usr=38.35%, sys=2.00%, ctx=1063, majf=0, minf=9 00:19:43.336 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:43.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.336 issued rwts: total=2699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.336 filename2: (groupid=0, jobs=1): err= 0: pid=82803: Mon Jul 15 21:33:14 2024 00:19:43.337 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.1MiB/10037msec) 00:19:43.337 slat (usec): min=6, max=4036, avg=19.24, stdev=112.19 00:19:43.337 clat (msec): min=12, max=131, avg=62.21, stdev=16.02 00:19:43.337 lat (msec): min=12, max=131, avg=62.23, stdev=16.02 00:19:43.337 clat percentiles (msec): 00:19:43.337 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 50], 00:19:43.337 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:19:43.337 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 93], 00:19:43.337 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 131], 00:19:43.337 | 99.99th=[ 131] 00:19:43.337 bw ( KiB/s): min= 768, max= 1208, per=3.95%, avg=1023.60, stdev=96.58, samples=20 00:19:43.337 iops : min= 192, max= 302, avg=255.90, stdev=24.14, samples=20 00:19:43.337 lat (msec) : 20=0.62%, 50=21.01%, 100=76.16%, 250=2.21% 00:19:43.337 cpu : usr=32.12%, sys=1.80%, ctx=947, majf=0, minf=9 00:19:43.337 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=76.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:43.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.337 complete : 0=0.0%, 4=89.5%, 8=9.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:43.337 issued rwts: total=2575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:43.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:43.337 00:19:43.337 Run status group 0 (all jobs): 00:19:43.337 READ: bw=25.3MiB/s (26.5MB/s), 1019KiB/s-1137KiB/s 
(1044kB/s-1164kB/s), io=255MiB (267MB), run=10001-10058msec 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 bdev_null0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 [2024-07-15 21:33:14.994992] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:43.337 21:33:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 bdev_null1 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.337 { 00:19:43.337 "params": { 00:19:43.337 "name": "Nvme$subsystem", 00:19:43.337 "trtype": "$TEST_TRANSPORT", 00:19:43.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.337 "adrfam": "ipv4", 00:19:43.337 "trsvcid": "$NVMF_PORT", 00:19:43.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.337 "hdgst": ${hdgst:-false}, 00:19:43.337 "ddgst": ${ddgst:-false} 00:19:43.337 }, 00:19:43.337 "method": "bdev_nvme_attach_controller" 00:19:43.337 } 00:19:43.337 EOF 00:19:43.337 )") 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:43.337 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.338 { 00:19:43.338 "params": { 00:19:43.338 "name": "Nvme$subsystem", 00:19:43.338 "trtype": "$TEST_TRANSPORT", 00:19:43.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.338 "adrfam": "ipv4", 00:19:43.338 "trsvcid": "$NVMF_PORT", 00:19:43.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.338 "hdgst": ${hdgst:-false}, 00:19:43.338 "ddgst": ${ddgst:-false} 00:19:43.338 }, 00:19:43.338 "method": "bdev_nvme_attach_controller" 00:19:43.338 } 00:19:43.338 EOF 00:19:43.338 )") 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:43.338 "params": { 00:19:43.338 "name": "Nvme0", 00:19:43.338 "trtype": "tcp", 00:19:43.338 "traddr": "10.0.0.2", 00:19:43.338 "adrfam": "ipv4", 00:19:43.338 "trsvcid": "4420", 00:19:43.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:43.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:43.338 "hdgst": false, 00:19:43.338 "ddgst": false 00:19:43.338 }, 00:19:43.338 "method": "bdev_nvme_attach_controller" 00:19:43.338 },{ 00:19:43.338 "params": { 00:19:43.338 "name": "Nvme1", 00:19:43.338 "trtype": "tcp", 00:19:43.338 "traddr": "10.0.0.2", 00:19:43.338 "adrfam": "ipv4", 00:19:43.338 "trsvcid": "4420", 00:19:43.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.338 "hdgst": false, 00:19:43.338 "ddgst": false 00:19:43.338 }, 00:19:43.338 "method": "bdev_nvme_attach_controller" 00:19:43.338 }' 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:43.338 21:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:43.338 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:43.338 ... 00:19:43.338 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:43.338 ... 
00:19:43.338 fio-3.35 00:19:43.338 Starting 4 threads 00:19:48.608 00:19:48.609 filename0: (groupid=0, jobs=1): err= 0: pid=82960: Mon Jul 15 21:33:20 2024 00:19:48.609 read: IOPS=2932, BW=22.9MiB/s (24.0MB/s)(115MiB/5001msec) 00:19:48.609 slat (nsec): min=5786, max=60201, avg=11383.80, stdev=6622.58 00:19:48.609 clat (usec): min=369, max=4913, avg=2699.31, stdev=804.17 00:19:48.609 lat (usec): min=379, max=4929, avg=2710.69, stdev=804.91 00:19:48.609 clat percentiles (usec): 00:19:48.609 | 1.00th=[ 988], 5.00th=[ 1516], 10.00th=[ 1598], 20.00th=[ 1729], 00:19:48.609 | 30.00th=[ 2073], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3032], 00:19:48.609 | 70.00th=[ 3228], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3851], 00:19:48.609 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 4752], 99.95th=[ 4817], 00:19:48.609 | 99.99th=[ 4883] 00:19:48.609 bw ( KiB/s): min=18000, max=25504, per=27.99%, avg=23256.44, stdev=2476.02, samples=9 00:19:48.609 iops : min= 2250, max= 3188, avg=2906.89, stdev=309.39, samples=9 00:19:48.609 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.04% 00:19:48.609 lat (msec) : 2=26.10%, 4=70.07%, 10=2.76% 00:19:48.609 cpu : usr=91.70%, sys=7.44%, ctx=15, majf=0, minf=0 00:19:48.609 IO depths : 1=0.1%, 2=3.8%, 4=61.7%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 issued rwts: total=14663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:48.609 filename0: (groupid=0, jobs=1): err= 0: pid=82961: Mon Jul 15 21:33:20 2024 00:19:48.609 read: IOPS=3049, BW=23.8MiB/s (25.0MB/s)(119MiB/5001msec) 00:19:48.609 slat (nsec): min=5969, max=72746, avg=12004.47, stdev=6727.66 00:19:48.609 clat (usec): min=620, max=5648, avg=2593.96, stdev=760.49 00:19:48.609 lat (usec): min=633, max=5655, avg=2605.97, stdev=761.15 00:19:48.609 clat percentiles (usec): 00:19:48.609 | 1.00th=[ 996], 5.00th=[ 1483], 10.00th=[ 1631], 20.00th=[ 1729], 00:19:48.609 | 30.00th=[ 1975], 40.00th=[ 2442], 50.00th=[ 2835], 60.00th=[ 2933], 00:19:48.609 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3490], 95.00th=[ 3687], 00:19:48.609 | 99.00th=[ 3949], 99.50th=[ 4047], 99.90th=[ 4359], 99.95th=[ 4555], 00:19:48.609 | 99.99th=[ 5473] 00:19:48.609 bw ( KiB/s): min=22736, max=26112, per=29.24%, avg=24294.78, stdev=1175.09, samples=9 00:19:48.609 iops : min= 2842, max= 3264, avg=3036.78, stdev=146.88, samples=9 00:19:48.609 lat (usec) : 750=0.21%, 1000=0.82% 00:19:48.609 lat (msec) : 2=29.64%, 4=68.61%, 10=0.72% 00:19:48.609 cpu : usr=91.18%, sys=7.70%, ctx=12, majf=0, minf=0 00:19:48.609 IO depths : 1=0.1%, 2=1.2%, 4=63.1%, 8=35.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 complete : 0=0.0%, 4=99.5%, 8=0.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 issued rwts: total=15250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:48.609 filename1: (groupid=0, jobs=1): err= 0: pid=82962: Mon Jul 15 21:33:20 2024 00:19:48.609 read: IOPS=2233, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5003msec) 00:19:48.609 slat (nsec): min=5802, max=90905, avg=22224.71, stdev=12464.35 00:19:48.609 clat (usec): min=1038, max=6125, avg=3494.98, stdev=646.64 00:19:48.609 lat (usec): min=1062, max=6142, avg=3517.20, stdev=647.35 00:19:48.609 clat percentiles (usec): 00:19:48.609 
| 1.00th=[ 1713], 5.00th=[ 2057], 10.00th=[ 2409], 20.00th=[ 3195], 00:19:48.609 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3687], 00:19:48.609 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4359], 00:19:48.609 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 5473], 99.95th=[ 5669], 00:19:48.609 | 99.99th=[ 5866] 00:19:48.609 bw ( KiB/s): min=16144, max=20976, per=21.68%, avg=18016.22, stdev=1501.52, samples=9 00:19:48.609 iops : min= 2018, max= 2622, avg=2252.00, stdev=187.70, samples=9 00:19:48.609 lat (msec) : 2=3.99%, 4=74.12%, 10=21.89% 00:19:48.609 cpu : usr=96.16%, sys=3.22%, ctx=6, majf=0, minf=0 00:19:48.609 IO depths : 1=2.7%, 2=18.5%, 4=53.6%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 issued rwts: total=11176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:48.609 filename1: (groupid=0, jobs=1): err= 0: pid=82963: Mon Jul 15 21:33:20 2024 00:19:48.609 read: IOPS=2175, BW=17.0MiB/s (17.8MB/s)(85.0MiB/5001msec) 00:19:48.609 slat (nsec): min=5796, max=83673, avg=21984.88, stdev=12434.78 00:19:48.609 clat (usec): min=1052, max=5724, avg=3589.08, stdev=594.82 00:19:48.609 lat (usec): min=1065, max=5734, avg=3611.06, stdev=594.86 00:19:48.609 clat percentiles (usec): 00:19:48.609 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2966], 20.00th=[ 3261], 00:19:48.609 | 30.00th=[ 3392], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3752], 00:19:48.609 | 70.00th=[ 3884], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4424], 00:19:48.609 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5342], 00:19:48.609 | 99.99th=[ 5538] 00:19:48.609 bw ( KiB/s): min=16256, max=20576, per=21.04%, avg=17484.22, stdev=1357.96, samples=9 00:19:48.609 iops : min= 2032, max= 2572, avg=2185.44, stdev=169.81, samples=9 00:19:48.609 lat (msec) : 2=2.67%, 4=72.07%, 10=25.26% 00:19:48.609 cpu : usr=96.30%, sys=3.06%, ctx=48, majf=0, minf=10 00:19:48.609 IO depths : 1=2.9%, 2=20.6%, 4=52.5%, 8=24.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:48.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.609 issued rwts: total=10878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:48.609 00:19:48.609 Run status group 0 (all jobs): 00:19:48.609 READ: bw=81.1MiB/s (85.1MB/s), 17.0MiB/s-23.8MiB/s (17.8MB/s-25.0MB/s), io=406MiB (426MB), run=5001-5003msec 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 21:33:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 ************************************ 00:19:48.609 END TEST fio_dif_rand_params 00:19:48.609 ************************************ 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.609 00:19:48.609 real 0m23.732s 00:19:48.609 user 2m4.192s 00:19:48.609 sys 0m7.788s 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 21:33:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:19:48.609 21:33:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:48.609 21:33:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:48.609 21:33:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.609 21:33:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 ************************************ 00:19:48.609 START TEST fio_dif_digest 00:19:48.609 ************************************ 00:19:48.609 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:48.610 bdev_null0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:48.610 [2024-07-15 21:33:21.275754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.610 { 00:19:48.610 "params": { 00:19:48.610 "name": "Nvme$subsystem", 00:19:48.610 "trtype": "$TEST_TRANSPORT", 00:19:48.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.610 "adrfam": "ipv4", 00:19:48.610 "trsvcid": "$NVMF_PORT", 00:19:48.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.610 "hdgst": ${hdgst:-false}, 00:19:48.610 "ddgst": ${ddgst:-false} 00:19:48.610 }, 00:19:48.610 "method": "bdev_nvme_attach_controller" 
00:19:48.610 } 00:19:48.610 EOF 00:19:48.610 )") 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.610 "params": { 00:19:48.610 "name": "Nvme0", 00:19:48.610 "trtype": "tcp", 00:19:48.610 "traddr": "10.0.0.2", 00:19:48.610 "adrfam": "ipv4", 00:19:48.610 "trsvcid": "4420", 00:19:48.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:48.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:48.610 "hdgst": true, 00:19:48.610 "ddgst": true 00:19:48.610 }, 00:19:48.610 "method": "bdev_nvme_attach_controller" 00:19:48.610 }' 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:48.610 21:33:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.610 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:48.610 ... 00:19:48.610 fio-3.35 00:19:48.610 Starting 3 threads 00:20:00.808 00:20:00.808 filename0: (groupid=0, jobs=1): err= 0: pid=83069: Mon Jul 15 21:33:32 2024 00:20:00.808 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10007msec) 00:20:00.808 slat (nsec): min=6288, max=38187, avg=10161.40, stdev=3964.36 00:20:00.808 clat (usec): min=10075, max=13873, avg=11117.24, stdev=555.23 00:20:00.808 lat (usec): min=10085, max=13892, avg=11127.40, stdev=555.90 00:20:00.808 clat percentiles (usec): 00:20:00.808 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10421], 00:20:00.808 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:20:00.808 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11600], 95.00th=[11731], 00:20:00.808 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13829], 99.95th=[13829], 00:20:00.808 | 99.99th=[13829] 00:20:00.808 bw ( KiB/s): min=33024, max=37632, per=33.34%, avg=34468.53, stdev=1474.23, samples=19 00:20:00.808 iops : min= 258, max= 294, avg=269.26, stdev=11.53, samples=19 00:20:00.808 lat (msec) : 20=100.00% 00:20:00.808 cpu : usr=89.19%, sys=10.32%, ctx=17, majf=0, minf=0 00:20:00.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:00.808 filename0: (groupid=0, jobs=1): err= 0: pid=83070: Mon Jul 15 21:33:32 2024 00:20:00.808 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10006msec) 00:20:00.808 slat (nsec): min=6467, max=77939, avg=10682.22, stdev=4335.66 00:20:00.808 clat (usec): min=9598, max=14750, avg=11115.84, stdev=555.94 00:20:00.808 lat (usec): min=9606, max=14765, avg=11126.52, stdev=556.51 00:20:00.808 clat percentiles (usec): 00:20:00.808 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10421], 00:20:00.808 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:20:00.808 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:20:00.808 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14746], 99.95th=[14746], 00:20:00.808 | 99.99th=[14746] 00:20:00.808 bw ( KiB/s): min=33024, max=37632, per=33.35%, avg=34475.58, stdev=1470.11, samples=19 00:20:00.808 iops : min= 258, max= 294, avg=269.32, stdev=11.50, samples=19 00:20:00.808 lat (msec) : 10=0.11%, 20=99.89% 00:20:00.808 cpu : usr=90.27%, sys=9.22%, ctx=16, majf=0, minf=0 00:20:00.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:00.808 filename0: (groupid=0, jobs=1): err= 0: pid=83071: Mon Jul 15 21:33:32 
2024 00:20:00.808 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10006msec) 00:20:00.808 slat (nsec): min=6164, max=30421, avg=10020.18, stdev=3915.56 00:20:00.808 clat (usec): min=7773, max=14624, avg=11116.90, stdev=566.88 00:20:00.808 lat (usec): min=7780, max=14643, avg=11126.92, stdev=567.41 00:20:00.808 clat percentiles (usec): 00:20:00.808 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10421], 00:20:00.808 | 30.00th=[11076], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:20:00.808 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[11731], 00:20:00.808 | 99.00th=[12125], 99.50th=[12649], 99.90th=[14615], 99.95th=[14615], 00:20:00.808 | 99.99th=[14615] 00:20:00.808 bw ( KiB/s): min=32256, max=37632, per=33.35%, avg=34475.53, stdev=1399.58, samples=19 00:20:00.808 iops : min= 252, max= 294, avg=269.32, stdev=10.93, samples=19 00:20:00.808 lat (msec) : 10=0.11%, 20=99.89% 00:20:00.808 cpu : usr=89.42%, sys=10.10%, ctx=9, majf=0, minf=0 00:20:00.808 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.808 issued rwts: total=2694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.808 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:00.808 00:20:00.808 Run status group 0 (all jobs): 00:20:00.808 READ: bw=101MiB/s (106MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=1010MiB (1059MB), run=10006-10007msec 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.808 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:00.808 ************************************ 00:20:00.808 END TEST fio_dif_digest 00:20:00.808 ************************************ 00:20:00.809 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.809 00:20:00.809 real 0m11.015s 00:20:00.809 user 0m27.539s 00:20:00.809 sys 0m3.275s 00:20:00.809 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.809 21:33:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:00.809 21:33:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:00.809 21:33:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.809 
21:33:32 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.809 rmmod nvme_tcp 00:20:00.809 rmmod nvme_fabrics 00:20:00.809 rmmod nvme_keyring 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82300 ']' 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82300 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 82300 ']' 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 82300 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82300 00:20:00.809 killing process with pid 82300 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82300' 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@967 -- # kill 82300 00:20:00.809 21:33:32 nvmf_dif -- common/autotest_common.sh@972 -- # wait 82300 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:00.809 21:33:32 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:00.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:00.809 Waiting for block devices as requested 00:20:00.809 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:00.809 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.809 21:33:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:00.809 21:33:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.809 21:33:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:00.809 00:20:00.809 real 1m0.587s 00:20:00.809 user 3m47.368s 00:20:00.809 sys 0m21.259s 00:20:00.809 21:33:33 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.809 21:33:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:00.809 ************************************ 00:20:00.809 END TEST nvmf_dif 00:20:00.809 ************************************ 00:20:00.809 21:33:33 -- common/autotest_common.sh@1142 -- # return 0 00:20:00.809 21:33:33 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:00.809 
21:33:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:00.809 21:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.809 21:33:33 -- common/autotest_common.sh@10 -- # set +x 00:20:00.809 ************************************ 00:20:00.809 START TEST nvmf_abort_qd_sizes 00:20:00.809 ************************************ 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:00.809 * Looking for test storage... 00:20:00.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:00.809 21:33:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:00.809 Cannot find device "nvmf_tgt_br" 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.809 Cannot find device "nvmf_tgt_br2" 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:00.809 Cannot find device "nvmf_tgt_br" 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:00.809 Cannot find device "nvmf_tgt_br2" 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.809 21:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.809 21:33:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:20:00.809 00:20:00.809 --- 10.0.0.2 ping statistics --- 00:20:00.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.809 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:20:00.809 00:20:00.809 --- 10.0.0.3 ping statistics --- 00:20:00.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.809 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:00.809 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:01.131 00:20:01.131 --- 10.0.0.1 ping statistics --- 00:20:01.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.131 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:01.131 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.131 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:01.131 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:01.131 21:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:01.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:01.960 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:01.960 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83669 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83669 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 83669 ']' 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.960 21:33:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:01.960 [2024-07-15 21:33:35.300369] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
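The connectivity checks above (ping 10.0.0.2 and 10.0.0.3 from the root namespace, and 10.0.0.1 from inside the namespace) run against the veth/bridge topology that nvmf_veth_init assembled a few entries earlier. Stripped of the xtrace prefixes, the wiring and the target launch reduce to roughly the following sketch; the interface, namespace, address and binary names are all taken from the trace itself, and error handling is omitted:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target runs inside the namespace; the initiator side stays in the root namespace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf

The initiator-facing address is 10.0.0.1 and the two target-side addresses are 10.0.0.2 and 10.0.0.3, which is why TCP port 4420 is opened only on nvmf_init_if.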
00:20:01.960 [2024-07-15 21:33:35.300453] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.220 [2024-07-15 21:33:35.459081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.220 [2024-07-15 21:33:35.560358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.220 [2024-07-15 21:33:35.560403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.220 [2024-07-15 21:33:35.560413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.220 [2024-07-15 21:33:35.560421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.220 [2024-07-15 21:33:35.560428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.220 [2024-07-15 21:33:35.560526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.220 [2024-07-15 21:33:35.560747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.220 [2024-07-15 21:33:35.561058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.220 [2024-07-15 21:33:35.561059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.480 [2024-07-15 21:33:35.603613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:03.050 21:33:36 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
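The scripts/common.sh helpers traced here select NVMe controllers by PCI class code (class 01, subclass 08, prog-if 02) and then keep only the ones bound to the kernel nvme driver. A condensed sketch of that selection, reusing the same lspci filter as the trace (the nvmes/bdfs names mirror the script's own variables; the awk test is a slightly simplified equivalent):

  # list NVMe controllers by PCI address (class 0108, prog-if 02)
  mapfile -t nvmes < <(lspci -mm -n -D | grep -- -p02 | awk '{gsub(/"/, "", $2)} $2 == "0108" {print $1}')
  # keep the ones currently bound to the kernel nvme driver
  bdfs=()
  for bdf in "${nvmes[@]}"; do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"

On this VM the walk yields 0000:00:10.0 and 0000:00:11.0, and abort_qd_sizes.sh takes the first of them as the controller to hand over to SPDK.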
00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 ************************************ 00:20:03.050 START TEST spdk_target_abort 00:20:03.050 ************************************ 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 spdk_targetn1 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 [2024-07-15 21:33:36.338973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.050 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:03.050 [2024-07-15 21:33:36.379128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.050 21:33:36 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:03.051 21:33:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:06.340 Initializing NVMe Controllers 00:20:06.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:06.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:06.340 Initialization complete. Launching workers. 
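For reference, the target that these abort passes exercise was assembled just above through rpc_cmd. Written as direct rpc.py invocations (a sketch: rpc.py talking to the default /var/tmp/spdk.sock is an assumption of this example, while every method name, argument and address is copied from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # local NVMe controller 0000:00:10.0 is attached as controller spdk_target, exposing bdev spdk_targetn1
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The abort example then connects to that listener over NVMe/TCP, which is what the "Attached to NVMe over Fabrics controller at 10.0.0.2:4420" line above reports.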
00:20:06.340 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12681, failed: 0 00:20:06.340 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1081, failed to submit 11600 00:20:06.340 success 784, unsuccess 297, failed 0 00:20:06.340 21:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:06.340 21:33:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:09.743 Initializing NVMe Controllers 00:20:09.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:09.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:09.743 Initialization complete. Launching workers. 00:20:09.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:20:09.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1168, failed to submit 7832 00:20:09.743 success 363, unsuccess 805, failed 0 00:20:09.743 21:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:09.743 21:33:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:13.019 Initializing NVMe Controllers 00:20:13.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:13.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:13.019 Initialization complete. Launching workers. 
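The three runs above and below differ only in queue depth; abort_qd_sizes.sh drives them from qds=(4 24 64), as seen in the trace. Condensed, the loop is:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Each pass reports the I/Os completed, the aborts submitted and those that could not be submitted, and how many of the submitted aborts succeeded; larger queue depths leave more commands in flight and so give the tool more candidates to abort.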
00:20:13.019 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34940, failed: 0 00:20:13.019 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2391, failed to submit 32549 00:20:13.019 success 552, unsuccess 1839, failed 0 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.019 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83669 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 83669 ']' 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 83669 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83669 00:20:13.655 killing process with pid 83669 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83669' 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 83669 00:20:13.655 21:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 83669 00:20:13.912 00:20:13.913 real 0m10.818s 00:20:13.913 user 0m42.291s 00:20:13.913 sys 0m2.776s 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:13.913 ************************************ 00:20:13.913 END TEST spdk_target_abort 00:20:13.913 ************************************ 00:20:13.913 21:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:13.913 21:33:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:13.913 21:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:13.913 21:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.913 21:33:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:13.913 
************************************ 00:20:13.913 START TEST kernel_target_abort 00:20:13.913 ************************************ 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:13.913 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:14.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.479 Waiting for block devices as requested 00:20:14.479 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.737 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:14.737 21:33:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:14.737 No valid GPT data, bailing 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:14.737 No valid GPT data, bailing 00:20:14.737 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
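Note: the "No valid GPT data, bailing" messages above are the expected outcome of the device scan. Before exporting a local namespace through the kernel target, the helper walks /sys/block/nvme*, skips zoned namespaces, and only accepts a device whose partition-table probe comes back empty, meaning nothing else owns it. A rough sketch of that selection logic; the real helper keeps scanning and ends up with the last match (nvme1n1 in this run), while this version simply takes the first free device, and the variable names are mine:

    nvme=""
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        # Skip zoned namespaces; they cannot serve as a plain backing device here.
        if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
            continue
        fi
        # An empty PTTYPE means no partition table, so the device is unclaimed.
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
            nvme=/dev/$dev
            break
        fi
    done
    echo "selected backing device: ${nvme:-none}"
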
00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:14.996 No valid GPT data, bailing 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:14.996 No valid GPT data, bailing 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 --hostid=b6f940fe-c85a-454e-b75c-95123b6e9f66 -a 10.0.0.1 -t tcp -s 4420 00:20:14.996 00:20:14.996 Discovery Log Number of Records 2, Generation counter 2 00:20:14.996 =====Discovery Log Entry 0====== 00:20:14.996 trtype: tcp 00:20:14.996 adrfam: ipv4 00:20:14.996 subtype: current discovery subsystem 00:20:14.996 treq: not specified, sq flow control disable supported 00:20:14.996 portid: 1 00:20:14.996 trsvcid: 4420 00:20:14.996 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:14.996 traddr: 10.0.0.1 00:20:14.996 eflags: none 00:20:14.996 sectype: none 00:20:14.996 =====Discovery Log Entry 1====== 00:20:14.996 trtype: tcp 00:20:14.996 adrfam: ipv4 00:20:14.996 subtype: nvme subsystem 00:20:14.996 treq: not specified, sq flow control disable supported 00:20:14.996 portid: 1 00:20:14.996 trsvcid: 4420 00:20:14.996 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:14.996 traddr: 10.0.0.1 00:20:14.996 eflags: none 00:20:14.996 sectype: none 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:14.996 21:33:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:14.996 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:14.997 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:14.997 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:14.997 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:14.997 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:14.997 21:33:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:18.297 Initializing NVMe Controllers 00:20:18.297 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:18.297 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:18.297 Initialization complete. Launching workers. 00:20:18.297 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37375, failed: 0 00:20:18.297 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37375, failed to submit 0 00:20:18.297 success 0, unsuccess 37375, failed 0 00:20:18.297 21:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:18.297 21:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:21.581 Initializing NVMe Controllers 00:20:21.581 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:21.581 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:21.581 Initialization complete. Launching workers. 
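Note: for reference, the configure_kernel_target steps that scrolled past a moment ago amount to building an NVMe-oF/TCP target out of the kernel nvmet configfs tree and pointing it at the selected namespace. This is a condensed sketch of those steps: the subsystem NQN, address and /dev/nvme1n1 backing device are the ones from this run, the attribute names are the standard nvmet configfs ones and are my mapping of the bare echo lines in the trace, and the SPDK-prefixed identifier write plus the hostnqn/hostid arguments to nvme discover are left out for brevity:

    modprobe nvmet nvmet_tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # Expose the subsystem on the port and sanity-check it with discovery.
    ln -s "$subsys" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420
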
00:20:21.581 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74425, failed: 0 00:20:21.581 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38001, failed to submit 36424 00:20:21.581 success 0, unsuccess 38001, failed 0 00:20:21.581 21:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:21.581 21:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:24.908 Initializing NVMe Controllers 00:20:24.908 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:24.908 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:24.908 Initialization complete. Launching workers. 00:20:24.908 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99695, failed: 0 00:20:24.908 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24992, failed to submit 74703 00:20:24.908 success 0, unsuccess 24992, failed 0 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:24.908 21:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:25.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:27.685 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:27.685 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:27.950 ************************************ 00:20:27.950 END TEST kernel_target_abort 00:20:27.950 ************************************ 00:20:27.950 00:20:27.950 real 0m13.940s 00:20:27.950 user 0m6.107s 00:20:27.950 sys 0m5.080s 00:20:27.950 21:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.950 21:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:27.950 
21:34:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.950 rmmod nvme_tcp 00:20:27.950 rmmod nvme_fabrics 00:20:27.950 rmmod nvme_keyring 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83669 ']' 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83669 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 83669 ']' 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 83669 00:20:27.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (83669) - No such process 00:20:27.950 Process with pid 83669 is not found 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 83669 is not found' 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:27.950 21:34:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:28.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.512 Waiting for block devices as requested 00:20:28.512 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:28.768 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.768 21:34:02 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:29.024 00:20:29.024 real 0m28.563s 00:20:29.024 user 0m49.628s 00:20:29.024 sys 0m9.728s 00:20:29.024 ************************************ 00:20:29.024 END TEST nvmf_abort_qd_sizes 00:20:29.024 ************************************ 00:20:29.024 21:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.024 21:34:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:29.024 21:34:02 -- common/autotest_common.sh@1142 -- # return 0 00:20:29.024 21:34:02 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:29.024 21:34:02 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:20:29.024 21:34:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.024 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:20:29.024 ************************************ 00:20:29.024 START TEST keyring_file 00:20:29.024 ************************************ 00:20:29.024 21:34:02 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:29.024 * Looking for test storage... 00:20:29.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:29.024 21:34:02 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:29.024 21:34:02 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.024 21:34:02 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.024 21:34:02 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.024 21:34:02 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.024 21:34:02 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.024 21:34:02 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.025 21:34:02 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.025 21:34:02 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.025 21:34:02 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:29.025 21:34:02 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@47 -- # : 0 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:29.025 21:34:02 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Tug50sblky 00:20:29.025 21:34:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:29.025 21:34:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Tug50sblky 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Tug50sblky 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Tug50sblky 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d8Vs1wH54l 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:29.282 21:34:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d8Vs1wH54l 00:20:29.282 21:34:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d8Vs1wH54l 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.d8Vs1wH54l 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@30 -- # tgtpid=84546 00:20:29.282 21:34:02 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84546 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84546 ']' 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.282 21:34:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:29.282 [2024-07-15 21:34:02.550238] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
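Note: the prep_key calls above write each test key into a temporary file in the TLS PSK interchange format and lock the file down to mode 0600; the permission and missing-file failure cases later in this test depend on exactly that. The following is a rough equivalent of what the helper appears to do here, under the assumption that the key string is used verbatim, a little-endian CRC-32 of it is appended, and the result is base64-wrapped inside NVMeTLSkey-1:<digest>:...:; that payload layout is an assumption read off the trace, not something this log proves:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)    # e.g. /tmp/tmp.Tug50sblky in the run above

    # Assumed layout: key string bytes plus a little-endian CRC-32, base64-wrapped.
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key" > "$path"

    # The bdevperf side rejects key files with looser modes, which the 0660 case below relies on.
    chmod 0600 "$path"
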
00:20:29.282 [2024-07-15 21:34:02.550303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84546 ] 00:20:29.539 [2024-07-15 21:34:02.691964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.539 [2024-07-15 21:34:02.790651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.539 [2024-07-15 21:34:02.831569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:30.101 21:34:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:30.101 [2024-07-15 21:34:03.402198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.101 null0 00:20:30.101 [2024-07-15 21:34:03.434101] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.101 [2024-07-15 21:34:03.434300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:30.101 [2024-07-15 21:34:03.442092] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.101 21:34:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:30.101 21:34:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:30.102 [2024-07-15 21:34:03.458065] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:30.102 request: 00:20:30.102 { 00:20:30.102 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.102 "secure_channel": false, 00:20:30.102 "listen_address": { 00:20:30.102 "trtype": "tcp", 00:20:30.102 "traddr": "127.0.0.1", 00:20:30.102 "trsvcid": "4420" 00:20:30.102 }, 00:20:30.102 "method": "nvmf_subsystem_add_listener", 00:20:30.102 "req_id": 1 00:20:30.102 } 00:20:30.102 Got JSON-RPC error response 00:20:30.102 response: 00:20:30.102 { 00:20:30.102 "code": -32602, 00:20:30.102 "message": "Invalid parameters" 00:20:30.102 } 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
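Note: what just ran is the positive/negative pair for the listener. The initial configuration (which file.sh appears to feed to rpc.py in one batch) adds a plain TCP listener on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, and the follow-up check asserts that adding the same listener again is rejected, with the target logging "Listener already exists" and the RPC returning -32602. Split out into individual calls for clarity; the flags are the ones visible in the trace, and the batch setup and secure-channel handling are simplified away:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First add succeeds and the target starts listening on 127.0.0.1:4420.
    "$RPC" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0

    # An identical second add must fail; the test only passes if this returns non-zero.
    if "$RPC" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "unexpected success: duplicate listener was accepted" >&2
        exit 1
    fi
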
00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.102 21:34:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.357 21:34:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=84563 00:20:30.357 21:34:03 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:30.357 21:34:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84563 /var/tmp/bperf.sock 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84563 ']' 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.357 21:34:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:30.357 [2024-07-15 21:34:03.518142] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 00:20:30.357 [2024-07-15 21:34:03.518206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84563 ] 00:20:30.357 [2024-07-15 21:34:03.660032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.632 [2024-07-15 21:34:03.760359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.632 [2024-07-15 21:34:03.801979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:31.196 21:34:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.196 21:34:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:31.196 21:34:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:31.196 21:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:31.196 21:34:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d8Vs1wH54l 00:20:31.196 21:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d8Vs1wH54l 00:20:31.452 21:34:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:20:31.452 21:34:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:20:31.452 21:34:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.452 21:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.452 21:34:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:31.708 21:34:04 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Tug50sblky == 
\/\t\m\p\/\t\m\p\.\T\u\g\5\0\s\b\l\k\y ]] 00:20:31.708 21:34:04 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:20:31.708 21:34:04 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:31.708 21:34:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.708 21:34:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.708 21:34:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:31.964 21:34:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.d8Vs1wH54l == \/\t\m\p\/\t\m\p\.\d\8\V\s\1\w\H\5\4\l ]] 00:20:31.964 21:34:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:20:31.964 21:34:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:31.964 21:34:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:31.964 21:34:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:31.964 21:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:31.964 21:34:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:32.221 21:34:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:20:32.221 21:34:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:32.221 21:34:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:32.221 21:34:05 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:32.221 21:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:32.478 [2024-07-15 21:34:05.742489] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.478 nvme0n1 00:20:32.478 21:34:05 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:20:32.478 21:34:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:32.478 21:34:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:32.478 21:34:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:32.478 21:34:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:32.478 21:34:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:32.734 21:34:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:20:32.734 21:34:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:20:32.734 21:34:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:32.734 21:34:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:32.734 21:34:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:20:32.734 21:34:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:32.734 21:34:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:32.991 21:34:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:20:32.991 21:34:06 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.991 Running I/O for 1 seconds... 00:20:34.363 00:20:34.363 Latency(us) 00:20:34.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.363 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:34.363 nvme0n1 : 1.00 16629.06 64.96 0.00 0.00 7676.57 4237.47 15475.97 00:20:34.363 =================================================================================================================== 00:20:34.363 Total : 16629.06 64.96 0.00 0.00 7676.57 4237.47 15475.97 00:20:34.363 0 00:20:34.363 21:34:07 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:34.363 21:34:07 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:34.363 21:34:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:34.620 21:34:07 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:20:34.620 21:34:07 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:34.620 21:34:07 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:34.620 21:34:07 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
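Note: the keyring_file checks around this point are all driven against the bdevperf instance over /var/tmp/bperf.sock rather than against the target socket: both PSK files are registered with keyring_file_add_key, a controller is attached with --psk key0, a one-second randrw run is launched through bdevperf.py, and the controller is detached again before the failure cases (wrong key, bad permissions, deleted key file) are exercised. The happy-path half of that, spelled out by hand with the paths and NQNs from this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # Register both PSK files with the bdevperf keyring.
    "$SPDK/scripts/rpc.py" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.Tug50sblky
    "$SPDK/scripts/rpc.py" -s "$SOCK" keyring_file_add_key key1 /tmp/tmp.d8Vs1wH54l

    # Attach to the TLS-enabled listener with key0, then run the workload.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

    # Detach again so the negative cases that follow start from a clean state.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_detach_controller nvme0
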
00:20:34.620 21:34:07 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:34.620 21:34:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:34.876 [2024-07-15 21:34:08.141662] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.876 [2024-07-15 21:34:08.142384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d65590 (107): Transport endpoint is not connected 00:20:34.876 [2024-07-15 21:34:08.143354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d65590 (9): Bad file descriptor 00:20:34.876 [2024-07-15 21:34:08.144349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:34.876 [2024-07-15 21:34:08.144390] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:34.876 [2024-07-15 21:34:08.144406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:34.876 request: 00:20:34.876 { 00:20:34.876 "name": "nvme0", 00:20:34.876 "trtype": "tcp", 00:20:34.876 "traddr": "127.0.0.1", 00:20:34.876 "adrfam": "ipv4", 00:20:34.876 "trsvcid": "4420", 00:20:34.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.876 "prchk_reftag": false, 00:20:34.876 "prchk_guard": false, 00:20:34.876 "hdgst": false, 00:20:34.876 "ddgst": false, 00:20:34.876 "psk": "key1", 00:20:34.876 "method": "bdev_nvme_attach_controller", 00:20:34.876 "req_id": 1 00:20:34.876 } 00:20:34.876 Got JSON-RPC error response 00:20:34.876 response: 00:20:34.876 { 00:20:34.876 "code": -5, 00:20:34.876 "message": "Input/output error" 00:20:34.876 } 00:20:34.876 21:34:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:34.876 21:34:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:34.876 21:34:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:34.876 21:34:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:34.876 21:34:08 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:20:34.876 21:34:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:34.876 21:34:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:34.876 21:34:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:34.876 21:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:34.876 21:34:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:35.133 21:34:08 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:20:35.133 21:34:08 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:20:35.133 21:34:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:35.133 21:34:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:35.133 21:34:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:35.133 21:34:08 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:35.133 21:34:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:35.389 21:34:08 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:35.389 21:34:08 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:20:35.389 21:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:35.646 21:34:08 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:20:35.646 21:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:35.646 21:34:08 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:20:35.646 21:34:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:35.646 21:34:08 keyring_file -- keyring/file.sh@77 -- # jq length 00:20:35.904 21:34:09 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:20:35.904 21:34:09 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Tug50sblky 00:20:35.904 21:34:09 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.904 21:34:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:35.904 21:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:36.160 [2024-07-15 21:34:09.380853] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tug50sblky': 0100660 00:20:36.160 [2024-07-15 21:34:09.380895] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:36.160 request: 00:20:36.160 { 00:20:36.160 "name": "key0", 00:20:36.160 "path": "/tmp/tmp.Tug50sblky", 00:20:36.160 "method": "keyring_file_add_key", 00:20:36.160 "req_id": 1 00:20:36.160 } 00:20:36.160 Got JSON-RPC error response 00:20:36.160 response: 00:20:36.160 { 00:20:36.160 "code": -1, 00:20:36.160 "message": "Operation not permitted" 00:20:36.160 } 00:20:36.160 21:34:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:36.160 21:34:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.161 21:34:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.161 21:34:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.161 21:34:09 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Tug50sblky 00:20:36.161 21:34:09 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:36.161 21:34:09 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tug50sblky 00:20:36.418 21:34:09 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Tug50sblky 00:20:36.418 21:34:09 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:20:36.418 21:34:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:36.418 21:34:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:36.418 21:34:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:36.418 21:34:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:36.418 21:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:36.676 21:34:09 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:20:36.676 21:34:09 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.676 21:34:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:36.676 21:34:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:36.676 [2024-07-15 21:34:09.999995] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Tug50sblky': No such file or directory 00:20:36.676 [2024-07-15 21:34:10.000043] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:36.676 [2024-07-15 21:34:10.000069] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:36.676 [2024-07-15 21:34:10.000078] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:36.676 [2024-07-15 21:34:10.000087] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:36.676 request: 00:20:36.676 { 00:20:36.676 "name": "nvme0", 00:20:36.676 "trtype": "tcp", 00:20:36.676 "traddr": "127.0.0.1", 00:20:36.676 "adrfam": "ipv4", 00:20:36.676 "trsvcid": "4420", 00:20:36.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.676 "prchk_reftag": false, 00:20:36.676 "prchk_guard": false, 00:20:36.676 "hdgst": false, 00:20:36.676 "ddgst": false, 00:20:36.676 "psk": "key0", 00:20:36.676 "method": "bdev_nvme_attach_controller", 00:20:36.676 "req_id": 1 00:20:36.676 } 00:20:36.676 
Got JSON-RPC error response 00:20:36.676 response: 00:20:36.676 { 00:20:36.676 "code": -19, 00:20:36.676 "message": "No such device" 00:20:36.676 } 00:20:36.676 21:34:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:20:36.676 21:34:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.676 21:34:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.676 21:34:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.676 21:34:10 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:20:36.676 21:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:36.934 21:34:10 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2bE9Pjv6pF 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:20:36.935 21:34:10 keyring_file -- nvmf/common.sh@705 -- # python - 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2bE9Pjv6pF 00:20:36.935 21:34:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2bE9Pjv6pF 00:20:37.194 21:34:10 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.2bE9Pjv6pF 00:20:37.194 21:34:10 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2bE9Pjv6pF 00:20:37.194 21:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2bE9Pjv6pF 00:20:37.194 21:34:10 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:37.194 21:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:37.453 nvme0n1 00:20:37.712 21:34:10 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:20:37.712 21:34:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:37.712 21:34:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:37.712 21:34:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:37.712 21:34:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:20:37.712 21:34:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:37.712 21:34:11 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:20:37.712 21:34:11 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:20:37.712 21:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:37.970 21:34:11 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:20:37.970 21:34:11 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:20:37.970 21:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:37.970 21:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:37.970 21:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:38.228 21:34:11 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:20:38.228 21:34:11 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:20:38.228 21:34:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:38.228 21:34:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:38.228 21:34:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:38.228 21:34:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:38.228 21:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:38.486 21:34:11 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:20:38.486 21:34:11 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:38.486 21:34:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:38.744 21:34:12 keyring_file -- keyring/file.sh@104 -- # jq length 00:20:38.744 21:34:12 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:20:38.744 21:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:39.002 21:34:12 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:20:39.002 21:34:12 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2bE9Pjv6pF 00:20:39.002 21:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2bE9Pjv6pF 00:20:39.260 21:34:12 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d8Vs1wH54l 00:20:39.260 21:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d8Vs1wH54l 00:20:39.528 21:34:12 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:39.528 21:34:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:39.785 nvme0n1 00:20:39.785 21:34:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:20:39.785 21:34:13 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:40.053 21:34:13 keyring_file -- keyring/file.sh@112 -- # config='{ 00:20:40.053 "subsystems": [ 00:20:40.053 { 00:20:40.053 "subsystem": "keyring", 00:20:40.053 "config": [ 00:20:40.053 { 00:20:40.053 "method": "keyring_file_add_key", 00:20:40.053 "params": { 00:20:40.053 "name": "key0", 00:20:40.053 "path": "/tmp/tmp.2bE9Pjv6pF" 00:20:40.053 } 00:20:40.053 }, 00:20:40.053 { 00:20:40.053 "method": "keyring_file_add_key", 00:20:40.053 "params": { 00:20:40.053 "name": "key1", 00:20:40.053 "path": "/tmp/tmp.d8Vs1wH54l" 00:20:40.053 } 00:20:40.053 } 00:20:40.053 ] 00:20:40.053 }, 00:20:40.053 { 00:20:40.053 "subsystem": "iobuf", 00:20:40.053 "config": [ 00:20:40.053 { 00:20:40.053 "method": "iobuf_set_options", 00:20:40.053 "params": { 00:20:40.053 "small_pool_count": 8192, 00:20:40.053 "large_pool_count": 1024, 00:20:40.053 "small_bufsize": 8192, 00:20:40.053 "large_bufsize": 135168 00:20:40.053 } 00:20:40.053 } 00:20:40.053 ] 00:20:40.053 }, 00:20:40.053 { 00:20:40.053 "subsystem": "sock", 00:20:40.053 "config": [ 00:20:40.053 { 00:20:40.053 "method": "sock_set_default_impl", 00:20:40.053 "params": { 00:20:40.053 "impl_name": "uring" 00:20:40.053 } 00:20:40.053 }, 00:20:40.053 { 00:20:40.053 "method": "sock_impl_set_options", 00:20:40.053 "params": { 00:20:40.053 "impl_name": "ssl", 00:20:40.053 "recv_buf_size": 4096, 00:20:40.053 "send_buf_size": 4096, 00:20:40.053 "enable_recv_pipe": true, 00:20:40.053 "enable_quickack": false, 00:20:40.053 "enable_placement_id": 0, 00:20:40.054 "enable_zerocopy_send_server": true, 00:20:40.054 "enable_zerocopy_send_client": false, 00:20:40.054 "zerocopy_threshold": 0, 00:20:40.054 "tls_version": 0, 00:20:40.054 "enable_ktls": false 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "sock_impl_set_options", 00:20:40.054 "params": { 00:20:40.054 "impl_name": "posix", 00:20:40.054 "recv_buf_size": 2097152, 00:20:40.054 "send_buf_size": 2097152, 00:20:40.054 "enable_recv_pipe": true, 00:20:40.054 "enable_quickack": false, 00:20:40.054 "enable_placement_id": 0, 00:20:40.054 "enable_zerocopy_send_server": true, 00:20:40.054 "enable_zerocopy_send_client": false, 00:20:40.054 "zerocopy_threshold": 0, 00:20:40.054 "tls_version": 0, 00:20:40.054 "enable_ktls": false 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "sock_impl_set_options", 00:20:40.054 "params": { 00:20:40.054 "impl_name": "uring", 00:20:40.054 "recv_buf_size": 2097152, 00:20:40.054 "send_buf_size": 2097152, 00:20:40.054 "enable_recv_pipe": true, 00:20:40.054 "enable_quickack": false, 00:20:40.054 "enable_placement_id": 0, 00:20:40.054 "enable_zerocopy_send_server": false, 00:20:40.054 "enable_zerocopy_send_client": false, 00:20:40.054 "zerocopy_threshold": 0, 00:20:40.054 "tls_version": 0, 00:20:40.054 "enable_ktls": false 00:20:40.054 } 00:20:40.054 } 00:20:40.054 ] 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "subsystem": "vmd", 00:20:40.054 "config": [] 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "subsystem": "accel", 00:20:40.054 "config": [ 00:20:40.054 { 00:20:40.054 "method": "accel_set_options", 00:20:40.054 "params": { 00:20:40.054 "small_cache_size": 128, 00:20:40.054 "large_cache_size": 16, 00:20:40.054 "task_count": 2048, 00:20:40.054 "sequence_count": 2048, 00:20:40.054 "buf_count": 2048 00:20:40.054 } 00:20:40.054 } 00:20:40.054 ] 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "subsystem": "bdev", 00:20:40.054 "config": [ 00:20:40.054 { 
00:20:40.054 "method": "bdev_set_options", 00:20:40.054 "params": { 00:20:40.054 "bdev_io_pool_size": 65535, 00:20:40.054 "bdev_io_cache_size": 256, 00:20:40.054 "bdev_auto_examine": true, 00:20:40.054 "iobuf_small_cache_size": 128, 00:20:40.054 "iobuf_large_cache_size": 16 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_raid_set_options", 00:20:40.054 "params": { 00:20:40.054 "process_window_size_kb": 1024 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_iscsi_set_options", 00:20:40.054 "params": { 00:20:40.054 "timeout_sec": 30 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_nvme_set_options", 00:20:40.054 "params": { 00:20:40.054 "action_on_timeout": "none", 00:20:40.054 "timeout_us": 0, 00:20:40.054 "timeout_admin_us": 0, 00:20:40.054 "keep_alive_timeout_ms": 10000, 00:20:40.054 "arbitration_burst": 0, 00:20:40.054 "low_priority_weight": 0, 00:20:40.054 "medium_priority_weight": 0, 00:20:40.054 "high_priority_weight": 0, 00:20:40.054 "nvme_adminq_poll_period_us": 10000, 00:20:40.054 "nvme_ioq_poll_period_us": 0, 00:20:40.054 "io_queue_requests": 512, 00:20:40.054 "delay_cmd_submit": true, 00:20:40.054 "transport_retry_count": 4, 00:20:40.054 "bdev_retry_count": 3, 00:20:40.054 "transport_ack_timeout": 0, 00:20:40.054 "ctrlr_loss_timeout_sec": 0, 00:20:40.054 "reconnect_delay_sec": 0, 00:20:40.054 "fast_io_fail_timeout_sec": 0, 00:20:40.054 "disable_auto_failback": false, 00:20:40.054 "generate_uuids": false, 00:20:40.054 "transport_tos": 0, 00:20:40.054 "nvme_error_stat": false, 00:20:40.054 "rdma_srq_size": 0, 00:20:40.054 "io_path_stat": false, 00:20:40.054 "allow_accel_sequence": false, 00:20:40.054 "rdma_max_cq_size": 0, 00:20:40.054 "rdma_cm_event_timeout_ms": 0, 00:20:40.054 "dhchap_digests": [ 00:20:40.054 "sha256", 00:20:40.054 "sha384", 00:20:40.054 "sha512" 00:20:40.054 ], 00:20:40.054 "dhchap_dhgroups": [ 00:20:40.054 "null", 00:20:40.054 "ffdhe2048", 00:20:40.054 "ffdhe3072", 00:20:40.054 "ffdhe4096", 00:20:40.054 "ffdhe6144", 00:20:40.054 "ffdhe8192" 00:20:40.054 ] 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_nvme_attach_controller", 00:20:40.054 "params": { 00:20:40.054 "name": "nvme0", 00:20:40.054 "trtype": "TCP", 00:20:40.054 "adrfam": "IPv4", 00:20:40.054 "traddr": "127.0.0.1", 00:20:40.054 "trsvcid": "4420", 00:20:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:40.054 "prchk_reftag": false, 00:20:40.054 "prchk_guard": false, 00:20:40.054 "ctrlr_loss_timeout_sec": 0, 00:20:40.054 "reconnect_delay_sec": 0, 00:20:40.054 "fast_io_fail_timeout_sec": 0, 00:20:40.054 "psk": "key0", 00:20:40.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:40.054 "hdgst": false, 00:20:40.054 "ddgst": false 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_nvme_set_hotplug", 00:20:40.054 "params": { 00:20:40.054 "period_us": 100000, 00:20:40.054 "enable": false 00:20:40.054 } 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "method": "bdev_wait_for_examine" 00:20:40.054 } 00:20:40.054 ] 00:20:40.054 }, 00:20:40.054 { 00:20:40.054 "subsystem": "nbd", 00:20:40.054 "config": [] 00:20:40.054 } 00:20:40.054 ] 00:20:40.054 }' 00:20:40.054 21:34:13 keyring_file -- keyring/file.sh@114 -- # killprocess 84563 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84563 ']' 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84563 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84563 00:20:40.054 killing process with pid 84563 00:20:40.054 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.054 00:20:40.054 Latency(us) 00:20:40.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.054 =================================================================================================================== 00:20:40.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84563' 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@967 -- # kill 84563 00:20:40.054 21:34:13 keyring_file -- common/autotest_common.sh@972 -- # wait 84563 00:20:40.313 21:34:13 keyring_file -- keyring/file.sh@117 -- # bperfpid=84796 00:20:40.313 21:34:13 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84796 /var/tmp/bperf.sock 00:20:40.313 21:34:13 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:20:40.313 "subsystems": [ 00:20:40.313 { 00:20:40.313 "subsystem": "keyring", 00:20:40.313 "config": [ 00:20:40.313 { 00:20:40.313 "method": "keyring_file_add_key", 00:20:40.313 "params": { 00:20:40.313 "name": "key0", 00:20:40.313 "path": "/tmp/tmp.2bE9Pjv6pF" 00:20:40.313 } 00:20:40.313 }, 00:20:40.313 { 00:20:40.313 "method": "keyring_file_add_key", 00:20:40.313 "params": { 00:20:40.313 "name": "key1", 00:20:40.313 "path": "/tmp/tmp.d8Vs1wH54l" 00:20:40.313 } 00:20:40.313 } 00:20:40.313 ] 00:20:40.313 }, 00:20:40.313 { 00:20:40.313 "subsystem": "iobuf", 00:20:40.313 "config": [ 00:20:40.313 { 00:20:40.313 "method": "iobuf_set_options", 00:20:40.313 "params": { 00:20:40.313 "small_pool_count": 8192, 00:20:40.313 "large_pool_count": 1024, 00:20:40.313 "small_bufsize": 8192, 00:20:40.313 "large_bufsize": 135168 00:20:40.313 } 00:20:40.313 } 00:20:40.313 ] 00:20:40.313 }, 00:20:40.313 { 00:20:40.313 "subsystem": "sock", 00:20:40.313 "config": [ 00:20:40.313 { 00:20:40.313 "method": "sock_set_default_impl", 00:20:40.313 "params": { 00:20:40.313 "impl_name": "uring" 00:20:40.313 } 00:20:40.313 }, 00:20:40.313 { 00:20:40.313 "method": "sock_impl_set_options", 00:20:40.313 "params": { 00:20:40.313 "impl_name": "ssl", 00:20:40.313 "recv_buf_size": 4096, 00:20:40.313 "send_buf_size": 4096, 00:20:40.313 "enable_recv_pipe": true, 00:20:40.313 "enable_quickack": false, 00:20:40.313 "enable_placement_id": 0, 00:20:40.313 "enable_zerocopy_send_server": true, 00:20:40.313 "enable_zerocopy_send_client": false, 00:20:40.314 "zerocopy_threshold": 0, 00:20:40.314 "tls_version": 0, 00:20:40.314 "enable_ktls": false 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "sock_impl_set_options", 00:20:40.314 "params": { 00:20:40.314 "impl_name": "posix", 00:20:40.314 "recv_buf_size": 2097152, 00:20:40.314 "send_buf_size": 2097152, 00:20:40.314 "enable_recv_pipe": true, 00:20:40.314 "enable_quickack": false, 00:20:40.314 "enable_placement_id": 0, 00:20:40.314 "enable_zerocopy_send_server": true, 00:20:40.314 "enable_zerocopy_send_client": false, 00:20:40.314 "zerocopy_threshold": 0, 00:20:40.314 "tls_version": 0, 00:20:40.314 "enable_ktls": false 00:20:40.314 } 
00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "sock_impl_set_options", 00:20:40.314 "params": { 00:20:40.314 "impl_name": "uring", 00:20:40.314 "recv_buf_size": 2097152, 00:20:40.314 "send_buf_size": 2097152, 00:20:40.314 "enable_recv_pipe": true, 00:20:40.314 "enable_quickack": false, 00:20:40.314 "enable_placement_id": 0, 00:20:40.314 "enable_zerocopy_send_server": false, 00:20:40.314 "enable_zerocopy_send_client": false, 00:20:40.314 "zerocopy_threshold": 0, 00:20:40.314 "tls_version": 0, 00:20:40.314 "enable_ktls": false 00:20:40.314 } 00:20:40.314 } 00:20:40.314 ] 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "subsystem": "vmd", 00:20:40.314 "config": [] 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "subsystem": "accel", 00:20:40.314 "config": [ 00:20:40.314 { 00:20:40.314 "method": "accel_set_options", 00:20:40.314 "params": { 00:20:40.314 "small_cache_size": 128, 00:20:40.314 "large_cache_size": 16, 00:20:40.314 "task_count": 2048, 00:20:40.314 "sequence_count": 2048, 00:20:40.314 "buf_count": 2048 00:20:40.314 } 00:20:40.314 } 00:20:40.314 ] 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "subsystem": "bdev", 00:20:40.314 "config": [ 00:20:40.314 { 00:20:40.314 "method": "bdev_set_options", 00:20:40.314 "params": { 00:20:40.314 "bdev_io_pool_size": 65535, 00:20:40.314 "bdev_io_cache_size": 256, 00:20:40.314 "bdev_auto_examine": true, 00:20:40.314 "iobuf_small_cache_size": 128, 00:20:40.314 "iobuf_large_cache_size": 16 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_raid_set_options", 00:20:40.314 "params": { 00:20:40.314 "process_window_size_kb": 1024 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_iscsi_set_options", 00:20:40.314 "params": { 00:20:40.314 "timeout_sec": 30 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_nvme_set_options", 00:20:40.314 "params": { 00:20:40.314 "action_on_timeout": "none", 00:20:40.314 "timeout_us": 0, 00:20:40.314 "timeout_admin_us": 0, 00:20:40.314 "keep_alive_timeout_ms": 10000, 00:20:40.314 "arbitration_burst": 0, 00:20:40.314 "low_priority_weight": 0, 00:20:40.314 "medium_priority_weight": 0, 00:20:40.314 "high_priority_weight": 0, 00:20:40.314 "nvme_adminq_poll_period_us": 10000, 00:20:40.314 "nvme_ioq_poll_period_us": 0, 00:20:40.314 "io_queue_requests": 512, 00:20:40.314 "delay_cm 21:34:13 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84796 ']' 00:20:40.314 d_submit": true, 00:20:40.314 "transport_retry_count": 4, 00:20:40.314 "bdev_retry_count": 3, 00:20:40.314 "transport_ack_timeout": 0, 00:20:40.314 "ctrlr_loss_timeout_sec": 0, 00:20:40.314 "reconnect_delay_sec": 0, 00:20:40.314 "fast_io_fail_timeout_sec": 0, 00:20:40.314 "disable_auto_failback": false, 00:20:40.314 "generate_uuids": false, 00:20:40.314 "transport_tos": 0, 00:20:40.314 "nvme_error_stat": false, 00:20:40.314 "rdma_srq_size": 0, 00:20:40.314 "io_path_stat": false, 00:20:40.314 "allow_accel_sequence": false, 00:20:40.314 "rdma_max_cq_size": 0, 00:20:40.314 "rdma_cm_event_timeout_ms": 0, 00:20:40.314 "dhchap_digests": [ 00:20:40.314 "sha256", 00:20:40.314 "sha384", 00:20:40.314 "sha512" 00:20:40.314 ], 00:20:40.314 "dhchap_dhgroups": [ 00:20:40.314 "null", 00:20:40.314 "ffdhe2048", 00:20:40.314 "ffdhe3072", 00:20:40.314 "ffdhe4096", 00:20:40.314 "ffdhe6144", 00:20:40.314 "ffdhe8192" 00:20:40.314 ] 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_nvme_attach_controller", 00:20:40.314 "params": { 00:20:40.314 "name": "nvme0", 00:20:40.314 "trtype": "TCP", 
00:20:40.314 "adrfam": "IPv4", 00:20:40.314 "traddr": "127.0.0.1", 00:20:40.314 "trsvcid": "4420", 00:20:40.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:40.314 "prchk_reftag": false, 00:20:40.314 "prchk_guard": false, 00:20:40.314 "ctrlr_loss_timeout_sec": 0, 00:20:40.314 "reconnect_delay_sec": 0, 00:20:40.314 "fast_io_fail_timeout_sec": 0, 00:20:40.314 "psk": "key0", 00:20:40.314 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:40.314 "hdgst": false, 00:20:40.314 "ddgst": false 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_nvme_set_hotplug", 00:20:40.314 "params": { 00:20:40.314 "period_us": 100000, 00:20:40.314 "enable": false 00:20:40.314 } 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "method": "bdev_wait_for_examine" 00:20:40.314 } 00:20:40.314 ] 00:20:40.314 }, 00:20:40.314 { 00:20:40.314 "subsystem": "nbd", 00:20:40.314 "config": [] 00:20:40.314 } 00:20:40.314 ] 00:20:40.314 }' 00:20:40.314 21:34:13 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:40.314 21:34:13 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:40.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:40.314 21:34:13 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.314 21:34:13 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:40.314 21:34:13 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.314 21:34:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:40.314 [2024-07-15 21:34:13.614265] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
00:20:40.314 [2024-07-15 21:34:13.614332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84796 ] 00:20:40.572 [2024-07-15 21:34:13.759930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.572 [2024-07-15 21:34:13.862684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.829 [2024-07-15 21:34:13.986840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:40.829 [2024-07-15 21:34:14.037295] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.394 21:34:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.394 21:34:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:20:41.394 21:34:14 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:41.394 21:34:14 keyring_file -- keyring/file.sh@120 -- # jq length 00:20:41.394 21:34:14 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:20:41.394 21:34:14 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:41.394 21:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:41.651 21:34:14 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:41.651 21:34:14 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:20:41.651 21:34:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:41.651 21:34:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:41.651 21:34:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:41.651 21:34:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:41.651 21:34:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:41.935 21:34:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:20:41.935 21:34:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:20:41.935 21:34:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:20:41.935 21:34:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:42.204 21:34:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:20:42.204 21:34:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:42.204 21:34:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2bE9Pjv6pF /tmp/tmp.d8Vs1wH54l 00:20:42.204 21:34:15 keyring_file -- keyring/file.sh@20 -- # killprocess 84796 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84796 ']' 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84796 00:20:42.204 21:34:15 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84796 00:20:42.204 killing process with pid 84796 00:20:42.204 Received shutdown signal, test time was about 1.000000 seconds 00:20:42.204 00:20:42.204 Latency(us) 00:20:42.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.204 =================================================================================================================== 00:20:42.204 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84796' 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@967 -- # kill 84796 00:20:42.204 21:34:15 keyring_file -- common/autotest_common.sh@972 -- # wait 84796 00:20:42.463 21:34:15 keyring_file -- keyring/file.sh@21 -- # killprocess 84546 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84546 ']' 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84546 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@953 -- # uname 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84546 00:20:42.463 killing process with pid 84546 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84546' 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@967 -- # kill 84546 00:20:42.463 [2024-07-15 21:34:15.632174] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:42.463 21:34:15 keyring_file -- common/autotest_common.sh@972 -- # wait 84546 00:20:42.722 00:20:42.722 real 0m13.747s 00:20:42.722 user 0m33.097s 00:20:42.722 sys 0m3.237s 00:20:42.722 21:34:15 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.722 ************************************ 00:20:42.722 END TEST keyring_file 00:20:42.722 ************************************ 00:20:42.722 21:34:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:42.722 21:34:16 -- common/autotest_common.sh@1142 -- # return 0 00:20:42.722 21:34:16 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:20:42.722 21:34:16 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:42.722 21:34:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:42.722 21:34:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.722 21:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:42.722 ************************************ 00:20:42.722 START TEST keyring_linux 00:20:42.722 ************************************ 00:20:42.722 21:34:16 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:42.982 * Looking for test 
storage... 00:20:42.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=b6f940fe-c85a-454e-b75c-95123b6e9f66 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.982 21:34:16 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.982 21:34:16 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.982 21:34:16 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.982 21:34:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.982 21:34:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.982 21:34:16 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.982 21:34:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:42.982 21:34:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:42.982 21:34:16 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:42.982 /tmp/:spdk-test:key0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:20:42.982 21:34:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:42.982 21:34:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:42.982 /tmp/:spdk-test:key1 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84907 00:20:42.982 21:34:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84907 00:20:42.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84907 ']' 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.982 21:34:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:43.241 [2024-07-15 21:34:16.363985] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
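Before spdk_tgt comes up, prep_key has written the two secrets to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 in the TLS PSK interchange format, NVMeTLSkey-1:00:<base64>:, the 00 reflecting the digest argument 0 used by the test (key1 is prepared the same way from 112233445566778899aabbccddeeff00). The sketch below is our own reconstruction of that step, not a quote from keyring/common.sh: the paths, hex secrets and 0600 mode come from the trace, while the assumed payload layout (the configured secret with a little-endian CRC-32 appended) is inferred from the key string that appears later in the log and should be treated as an assumption.

# Reconstruction of prep_key for the Linux-keyring test (assumptions noted above).
key=00112233445566778899aabbccddeeff
path=/tmp/:spdk-test:key0
python3 -c '
import base64, binascii, sys
secret = sys.argv[1].encode()
# Assumed payload layout: secret bytes followed by CRC-32 of the secret, little-endian.
payload = secret + binascii.crc32(secret).to_bytes(4, "little")
print("NVMeTLSkey-1:00:" + base64.b64encode(payload).decode() + ":")
' "$key" > "$path"
chmod 0600 "$path"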
00:20:43.241 [2024-07-15 21:34:16.364080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84907 ] 00:20:43.241 [2024-07-15 21:34:16.511491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.500 [2024-07-15 21:34:16.610094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.500 [2024-07-15 21:34:16.652527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:44.065 [2024-07-15 21:34:17.263185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.065 null0 00:20:44.065 [2024-07-15 21:34:17.295079] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.065 [2024-07-15 21:34:17.295416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:44.065 38864387 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:44.065 1000553176 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84921 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:44.065 21:34:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84921 /var/tmp/bperf.sock 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84921 ']' 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:44.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.065 21:34:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:44.065 [2024-07-15 21:34:17.377972] Starting SPDK v24.09-pre git sha1 0663932f5 / DPDK 24.03.0 initialization... 
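The two keyctl add commands a few entries back are the crux of the Linux-keyring variant: instead of registering key files with SPDK, the interchange-format strings are loaded into the kernel session keyring and later referenced by name. The bdevperf instance coming up here enables that lookup with keyring_linux_set_options --enable and then attaches with --psk :spdk-test:key0, both visible further down. Condensed, and with the serial number from this particular run used only as an example, the keyctl side of the test is:

# Load the PSK into the session keyring; keyctl prints the new serial (38864387 in this run).
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Resolve the name back to a serial and show the payload, as linux.sh@16 and @27 do.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

# Cleanup at the end of the test unlinks the key again (linux.sh@34), printing "1 links removed".
keyctl unlink "$sn"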
00:20:44.065 [2024-07-15 21:34:17.378046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84921 ] 00:20:44.322 [2024-07-15 21:34:17.522346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.322 [2024-07-15 21:34:17.629965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.256 21:34:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.256 21:34:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:20:45.256 21:34:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:45.256 21:34:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:45.256 21:34:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:45.256 21:34:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:45.514 [2024-07-15 21:34:18.719675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:45.514 21:34:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:45.514 21:34:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:45.772 [2024-07-15 21:34:18.963129] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.772 nvme0n1 00:20:45.772 21:34:19 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:45.772 21:34:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:45.772 21:34:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:45.772 21:34:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:45.772 21:34:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:45.772 21:34:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:46.030 21:34:19 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:46.030 21:34:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:46.030 21:34:19 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:46.030 21:34:19 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:46.030 21:34:19 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:46.030 21:34:19 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:46.030 21:34:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@25 -- # sn=38864387 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:46.288 
21:34:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 38864387 == \3\8\8\6\4\3\8\7 ]] 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 38864387 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:46.288 21:34:19 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:46.288 Running I/O for 1 seconds... 00:20:47.232 00:20:47.232 Latency(us) 00:20:47.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:47.232 nvme0n1 : 1.01 17215.24 67.25 0.00 0.00 7404.86 5948.25 12264.97 00:20:47.232 =================================================================================================================== 00:20:47.232 Total : 17215.24 67.25 0.00 0.00 7404.86 5948.25 12264.97 00:20:47.232 0 00:20:47.232 21:34:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:47.232 21:34:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:47.490 21:34:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:47.490 21:34:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:47.490 21:34:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:47.490 21:34:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:47.490 21:34:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:47.490 21:34:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:47.747 21:34:21 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:47.747 21:34:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:47.747 21:34:21 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:47.747 21:34:21 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.747 21:34:21 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:47.747 21:34:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:48.004 [2024-07-15 21:34:21.231102] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.004 [2024-07-15 21:34:21.232075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6460 (107): Transport endpoint is not connected 00:20:48.004 [2024-07-15 21:34:21.233065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b6460 (9): Bad file descriptor 00:20:48.004 [2024-07-15 21:34:21.234060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:48.004 [2024-07-15 21:34:21.234182] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:48.004 [2024-07-15 21:34:21.234253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:48.004 request: 00:20:48.004 { 00:20:48.004 "name": "nvme0", 00:20:48.004 "trtype": "tcp", 00:20:48.004 "traddr": "127.0.0.1", 00:20:48.004 "adrfam": "ipv4", 00:20:48.004 "trsvcid": "4420", 00:20:48.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.004 "prchk_reftag": false, 00:20:48.004 "prchk_guard": false, 00:20:48.004 "hdgst": false, 00:20:48.004 "ddgst": false, 00:20:48.004 "psk": ":spdk-test:key1", 00:20:48.004 "method": "bdev_nvme_attach_controller", 00:20:48.004 "req_id": 1 00:20:48.004 } 00:20:48.004 Got JSON-RPC error response 00:20:48.004 response: 00:20:48.004 { 00:20:48.004 "code": -5, 00:20:48.004 "message": "Input/output error" 00:20:48.004 } 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@33 -- # sn=38864387 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 38864387 00:20:48.004 1 links removed 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@33 -- # sn=1000553176 00:20:48.004 21:34:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1000553176 00:20:48.004 1 links removed 00:20:48.004 21:34:21 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 84921 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84921 ']' 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84921 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84921 00:20:48.004 killing process with pid 84921 00:20:48.004 Received shutdown signal, test time was about 1.000000 seconds 00:20:48.004 00:20:48.004 Latency(us) 00:20:48.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.004 =================================================================================================================== 00:20:48.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84921' 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 84921 00:20:48.004 21:34:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 84921 00:20:48.262 21:34:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84907 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84907 ']' 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84907 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84907 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:48.262 killing process with pid 84907 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84907' 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 84907 00:20:48.262 21:34:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 84907 00:20:48.520 00:20:48.520 real 0m5.834s 00:20:48.520 user 0m10.788s 00:20:48.520 sys 0m1.640s 00:20:48.520 21:34:21 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:48.520 ************************************ 00:20:48.520 END TEST keyring_linux 00:20:48.520 ************************************ 00:20:48.520 21:34:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:48.778 21:34:21 -- common/autotest_common.sh@1142 -- # return 0 00:20:48.778 21:34:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
00:20:48.778 21:34:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:20:48.778 21:34:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:20:48.778 21:34:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:20:48.778 21:34:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:20:48.778 21:34:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:20:48.778 21:34:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:20:48.778 21:34:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:20:48.778 21:34:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.778 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:48.778 21:34:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:20:48.778 21:34:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:48.778 21:34:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:48.778 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:51.305 INFO: APP EXITING 00:20:51.305 INFO: killing all VMs 00:20:51.305 INFO: killing vhost app 00:20:51.305 INFO: EXIT DONE 00:20:51.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.926 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:51.926 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:52.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:52.861 Cleaning 00:20:52.861 Removing: /var/run/dpdk/spdk0/config 00:20:52.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:52.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:52.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:52.861 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:52.861 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:52.861 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:52.861 Removing: /var/run/dpdk/spdk1/config 00:20:52.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:52.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:52.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:52.861 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:52.861 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:52.861 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:52.861 Removing: /var/run/dpdk/spdk2/config 00:20:52.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:52.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:52.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:52.861 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:52.861 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:52.861 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:52.861 Removing: /var/run/dpdk/spdk3/config 00:20:52.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:52.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:52.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:52.861 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:52.861 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:52.861 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:52.861 Removing: /var/run/dpdk/spdk4/config 00:20:52.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:52.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:52.861 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:52.861 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:52.861 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:52.861 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:52.861 Removing: /dev/shm/nvmf_trace.0 00:20:52.862 Removing: /dev/shm/spdk_tgt_trace.pid58754 00:20:52.862 Removing: /var/run/dpdk/spdk0 00:20:52.862 Removing: /var/run/dpdk/spdk1 00:20:52.862 Removing: /var/run/dpdk/spdk2 00:20:52.862 Removing: /var/run/dpdk/spdk3 00:20:52.862 Removing: /var/run/dpdk/spdk4 00:20:53.121 Removing: /var/run/dpdk/spdk_pid58609 00:20:53.121 Removing: /var/run/dpdk/spdk_pid58754 00:20:53.121 Removing: /var/run/dpdk/spdk_pid58947 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59033 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59055 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59165 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59183 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59301 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59486 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59626 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59691 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59761 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59847 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59924 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59957 00:20:53.121 Removing: /var/run/dpdk/spdk_pid59987 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60054 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60159 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60575 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60627 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60673 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60689 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60756 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60766 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60833 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60848 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60895 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60913 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60953 00:20:53.121 Removing: /var/run/dpdk/spdk_pid60971 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61088 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61129 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61198 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61255 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61274 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61338 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61367 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61407 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61436 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61471 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61505 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61540 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61574 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61609 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61643 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61678 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61709 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61749 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61778 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61812 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61848 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61881 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61920 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61952 00:20:53.121 Removing: /var/run/dpdk/spdk_pid61992 00:20:53.121 Removing: /var/run/dpdk/spdk_pid62024 00:20:53.379 Removing: /var/run/dpdk/spdk_pid62094 00:20:53.379 Removing: /var/run/dpdk/spdk_pid62187 00:20:53.379 Removing: /var/run/dpdk/spdk_pid62496 00:20:53.379 Removing: /var/run/dpdk/spdk_pid62508 00:20:53.379 
Removing: /var/run/dpdk/spdk_pid62539 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62558 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62574 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62593 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62606 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62622 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62641 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62654 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62675 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62694 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62708 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62723 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62742 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62756 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62771 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62792 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62804 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62825 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62855 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62869 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62904 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62967 00:20:53.380 Removing: /var/run/dpdk/spdk_pid62991 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63006 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63029 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63044 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63046 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63094 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63102 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63136 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63151 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63155 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63170 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63174 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63189 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63193 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63208 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63232 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63263 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63273 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63301 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63317 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63320 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63366 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63378 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63404 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63412 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63419 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63428 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63440 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63446 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63455 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63462 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63531 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63573 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63672 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63711 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63756 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63766 00:20:53.380 Removing: /var/run/dpdk/spdk_pid63787 00:20:53.638 Removing: /var/run/dpdk/spdk_pid63807 00:20:53.639 Removing: /var/run/dpdk/spdk_pid63843 00:20:53.639 Removing: /var/run/dpdk/spdk_pid63854 00:20:53.639 Removing: /var/run/dpdk/spdk_pid63924 00:20:53.639 Removing: /var/run/dpdk/spdk_pid63940 00:20:53.639 Removing: /var/run/dpdk/spdk_pid63985 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64037 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64088 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64111 00:20:53.639 Removing: 
/var/run/dpdk/spdk_pid64203 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64245 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64278 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64502 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64594 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64628 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64938 00:20:53.639 Removing: /var/run/dpdk/spdk_pid64976 00:20:53.639 Removing: /var/run/dpdk/spdk_pid65259 00:20:53.639 Removing: /var/run/dpdk/spdk_pid65658 00:20:53.639 Removing: /var/run/dpdk/spdk_pid65916 00:20:53.639 Removing: /var/run/dpdk/spdk_pid66677 00:20:53.639 Removing: /var/run/dpdk/spdk_pid67494 00:20:53.639 Removing: /var/run/dpdk/spdk_pid67610 00:20:53.639 Removing: /var/run/dpdk/spdk_pid67678 00:20:53.639 Removing: /var/run/dpdk/spdk_pid68922 00:20:53.639 Removing: /var/run/dpdk/spdk_pid69131 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72116 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72410 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72518 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72646 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72674 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72701 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72723 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72810 00:20:53.639 Removing: /var/run/dpdk/spdk_pid72943 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73083 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73158 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73340 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73418 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73505 00:20:53.639 Removing: /var/run/dpdk/spdk_pid73808 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74195 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74197 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74476 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74490 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74504 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74535 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74545 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74847 00:20:53.639 Removing: /var/run/dpdk/spdk_pid74896 00:20:53.639 Removing: /var/run/dpdk/spdk_pid75169 00:20:53.639 Removing: /var/run/dpdk/spdk_pid75373 00:20:53.639 Removing: /var/run/dpdk/spdk_pid75745 00:20:53.639 Removing: /var/run/dpdk/spdk_pid76251 00:20:53.639 Removing: /var/run/dpdk/spdk_pid77022 00:20:53.639 Removing: /var/run/dpdk/spdk_pid77612 00:20:53.639 Removing: /var/run/dpdk/spdk_pid77614 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79499 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79559 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79614 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79674 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79795 00:20:53.639 Removing: /var/run/dpdk/spdk_pid79850 00:20:53.898 Removing: /var/run/dpdk/spdk_pid79904 00:20:53.898 Removing: /var/run/dpdk/spdk_pid79959 00:20:53.898 Removing: /var/run/dpdk/spdk_pid80279 00:20:53.898 Removing: /var/run/dpdk/spdk_pid81420 00:20:53.898 Removing: /var/run/dpdk/spdk_pid81565 00:20:53.898 Removing: /var/run/dpdk/spdk_pid81806 00:20:53.898 Removing: /var/run/dpdk/spdk_pid82357 00:20:53.898 Removing: /var/run/dpdk/spdk_pid82517 00:20:53.898 Removing: /var/run/dpdk/spdk_pid82678 00:20:53.898 Removing: /var/run/dpdk/spdk_pid82776 00:20:53.898 Removing: /var/run/dpdk/spdk_pid82951 00:20:53.898 Removing: /var/run/dpdk/spdk_pid83065 00:20:53.898 Removing: /var/run/dpdk/spdk_pid83720 00:20:53.898 Removing: /var/run/dpdk/spdk_pid83755 00:20:53.898 Removing: /var/run/dpdk/spdk_pid83796 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84048 
00:20:53.898 Removing: /var/run/dpdk/spdk_pid84083 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84118 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84546 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84563 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84796 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84907 00:20:53.898 Removing: /var/run/dpdk/spdk_pid84921 00:20:53.898 Clean 00:20:53.898 21:34:27 -- common/autotest_common.sh@1451 -- # return 0 00:20:53.898 21:34:27 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:20:53.898 21:34:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.898 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:20:53.898 21:34:27 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:20:53.898 21:34:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.898 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:20:54.157 21:34:27 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:54.157 21:34:27 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:54.157 21:34:27 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:54.157 21:34:27 -- spdk/autotest.sh@391 -- # hash lcov 00:20:54.157 21:34:27 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:54.157 21:34:27 -- spdk/autotest.sh@393 -- # hostname 00:20:54.157 21:34:27 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:54.157 geninfo: WARNING: invalid characters removed from testname! 
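The coverage post-processing that follows captures the counters produced during the test run, merges them with the pre-test baseline, and prunes external code from the combined report. Condensed into a sketch (paths shortened; the real invocations also carry the --rc/--no-external option set shown in the log):

OUT=/home/vagrant/spdk_repo/spdk/../output
# Capture counters gathered while the tests ran.
lcov -q -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$OUT/cov_test.info"
# Merge the pre-test baseline with the test-time capture.
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Strip DPDK, system headers and helper apps from the combined report.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done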
00:21:20.718 21:34:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:22.094 21:34:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:24.623 21:34:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:26.520 21:34:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:29.124 21:35:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:31.128 21:35:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:33.060 21:35:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:33.060 21:35:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.060 21:35:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:33.060 21:35:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.060 21:35:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.060 21:35:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.060 21:35:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.060 21:35:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.060 21:35:06 -- paths/export.sh@5 -- $ export PATH 00:21:33.060 21:35:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.060 21:35:06 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:33.060 21:35:06 -- common/autobuild_common.sh@444 -- $ date +%s 00:21:33.060 21:35:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721079306.XXXXXX 00:21:33.060 21:35:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721079306.mggggb 00:21:33.060 21:35:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:21:33.060 21:35:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:21:33.060 21:35:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:33.060 21:35:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:33.060 21:35:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:33.060 21:35:06 -- common/autobuild_common.sh@460 -- $ get_config_params 00:21:33.060 21:35:06 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:21:33.060 21:35:06 -- common/autotest_common.sh@10 -- $ set +x 00:21:33.318 21:35:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:21:33.318 21:35:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:21:33.318 21:35:06 -- pm/common@17 -- $ local monitor 00:21:33.318 21:35:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:33.318 21:35:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:33.318 21:35:06 -- pm/common@25 -- $ sleep 1 00:21:33.318 21:35:06 -- pm/common@21 -- $ date +%s 00:21:33.318 21:35:06 -- pm/common@21 -- $ date +%s 00:21:33.318 21:35:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721079306 00:21:33.318 21:35:06 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721079306 00:21:33.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721079306_collect-vmstat.pm.log 00:21:33.318 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721079306_collect-cpu-load.pm.log 00:21:34.252 21:35:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:21:34.252 21:35:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:34.252 21:35:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:34.252 21:35:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:34.252 21:35:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:34.252 21:35:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:34.252 21:35:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:34.252 21:35:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:34.252 21:35:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:34.252 21:35:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:34.252 21:35:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:34.252 21:35:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:34.252 21:35:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:34.253 21:35:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:34.253 21:35:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:34.253 21:35:07 -- pm/common@44 -- $ pid=86699 00:21:34.253 21:35:07 -- pm/common@50 -- $ kill -TERM 86699 00:21:34.253 21:35:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:34.253 21:35:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:34.253 21:35:07 -- pm/common@44 -- $ pid=86701 00:21:34.253 21:35:07 -- pm/common@50 -- $ kill -TERM 86701 00:21:34.253 + [[ -n 5106 ]] 00:21:34.253 + sudo kill 5106 00:21:34.265 [Pipeline] } 00:21:34.286 [Pipeline] // timeout 00:21:34.292 [Pipeline] } 00:21:34.310 [Pipeline] // stage 00:21:34.315 [Pipeline] } 00:21:34.329 [Pipeline] // catchError 00:21:34.337 [Pipeline] stage 00:21:34.339 [Pipeline] { (Stop VM) 00:21:34.351 [Pipeline] sh 00:21:34.626 + vagrant halt 00:21:37.953 ==> default: Halting domain... 00:21:44.617 [Pipeline] sh 00:21:44.907 + vagrant destroy -f 00:21:48.192 ==> default: Removing domain... 
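The resource-monitor shutdown earlier in the teardown stops each collector through the pidfile it wrote at start-up. A rough sketch of that pattern, assuming the pids shown in the log (86699, 86701) are read back from the corresponding .pid files (error handling elided):

POWER=/home/vagrant/spdk_repo/spdk/../output/power
for pidfile in "$POWER/collect-cpu-load.pid" "$POWER/collect-vmstat.pid"; do
    # Each collector records its own pid when it starts; stop it with SIGTERM.
    [[ -e $pidfile ]] || continue
    kill -TERM "$(cat "$pidfile")"
done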
00:21:48.205 [Pipeline] sh 00:21:48.487 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:21:48.498 [Pipeline] } 00:21:48.516 [Pipeline] // stage 00:21:48.522 [Pipeline] } 00:21:48.536 [Pipeline] // dir 00:21:48.542 [Pipeline] } 00:21:48.560 [Pipeline] // wrap 00:21:48.566 [Pipeline] } 00:21:48.582 [Pipeline] // catchError 00:21:48.592 [Pipeline] stage 00:21:48.594 [Pipeline] { (Epilogue) 00:21:48.610 [Pipeline] sh 00:21:48.888 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:54.175 [Pipeline] catchError 00:21:54.177 [Pipeline] { 00:21:54.193 [Pipeline] sh 00:21:54.474 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:54.474 Artifacts sizes are good 00:21:54.483 [Pipeline] } 00:21:54.517 [Pipeline] // catchError 00:21:54.530 [Pipeline] archiveArtifacts 00:21:54.536 Archiving artifacts 00:21:54.719 [Pipeline] cleanWs 00:21:54.731 [WS-CLEANUP] Deleting project workspace... 00:21:54.731 [WS-CLEANUP] Deferred wipeout is used... 00:21:54.736 [WS-CLEANUP] done 00:21:54.741 [Pipeline] } 00:21:54.762 [Pipeline] // stage 00:21:54.769 [Pipeline] } 00:21:54.787 [Pipeline] // node 00:21:54.793 [Pipeline] End of Pipeline 00:21:54.838 Finished: SUCCESS